During April I wrote an article for Nodalities ( the magazine on the Semantic Web and Linked Data published by Talis ). The topic was the story of the Open Data movement in Italy and how we are looking for more support from the international community. We were worried about the “Save the Data” campaign. Here is the article:
In the last month there have been so many ideas written down and so many talks about content negotiation and URIs, information spaces and similar dilemmas in the Semantic Web area…
And what about the abstractions that live on the Web?
So, while setting up the first version of the Fullout site ( the first coherent version will be online as soon as possible, in English too ), I’ll take the chance to study .htaccess, the content negotiation issue and the language dilemma of web pages more closely…
With the large number of editors and programs that sometimes make our perception of the Web platform so distant and unclear… I’m convinced it’s time to take a better look at HTTP and at the world we all think we know so well.
Maybe it’s not so clear.
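To make content negotiation a bit less abstract: at its core, the server just matches the client’s Accept header against the representations it actually has. Here is a minimal sketch in Python ( my own simplified matcher, with an invented `negotiate` helper — not a full RFC-compliant implementation ):

```python
def negotiate(accept_header, available):
    """Pick the best available media type for a given Accept header.

    A toy sketch: parses "type;q=0.x" entries, sorts by quality,
    and returns the first match (or any type for "*/*").
    """
    prefs = []
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        mtype = fields[0].strip()
        q = 1.0
        for param in fields[1:]:
            key, _, value = param.strip().partition("=")
            if key == "q":
                try:
                    q = float(value)
                except ValueError:
                    pass
        prefs.append((mtype, q))
    # Highest quality first
    prefs.sort(key=lambda pair: pair[1], reverse=True)
    for mtype, q in prefs:
        if q <= 0:
            continue  # q=0 means "explicitly not acceptable"
        if mtype in available:
            return mtype
        if mtype == "*/*" and available:
            return available[0]
    return None  # nothing acceptable: a real server would send 406
```

The same URI can then serve HTML to a browser and RDF to a Semantic Web agent, which is exactly the kind of thing .htaccess can switch on for you on an Apache host.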
The principles I see in all these things are SIMPLICITY and the rule of LEAST POWER…
It’s THE MOMENT to rethink how we build sites and put data on the web, moving towards the Web of Data .)
I totally agree with this post:
-> Web design 2.0 – it’s all about the resource and its URL
Site owners effectively thought of their sites as silos – a self contained object, a web of pages, with a handful of doors (links) in and out – well even if they didn’t think of them as silos they sure treated them as such. But as Tom Coates puts it Web 2.0 is about moving from a “web of pages to a web of data“:
A web of data sources, services for exploring and manipulating data, and ways that users can connect them together.
This has some important implications for the design of web sites. Users expect to be able to navigate directly from resource to resource. From concept to concept.
I also noticed one more thing: we concentrate on which CMS to use for a project, but not on making it TRANSPARENT to the user; in other words, we make the backend technology of a web site CLEAR and EXPOSED to the user…
It’s not so right.
We must change this way of doing things.
Let’s change it.
Starting with one of the core principles of the Web: URIs don’t change.
What to leave out
[ from URI ]
Everything! After the creation date, putting any information in the name is asking for trouble one way or another.
- Author’s name – authorship can change with new versions. People quit organizations and hand things on.
- Subject. This is tricky. It always looks good at the time but changes surprisingly fast. I discuss this more below.
- Status – directories like “old” and “draft” and so on, not to mention “latest” and “cool” appear all over file systems.
- Documents change status – or there would be no point in producing drafts. The latest version of a document needs a persistent identifier whatever its status is. Keep the status out of the name.
- Access. At W3C we divide the site into “Team access”, “Member access” and “Public access”. It sounds good, but of course documents start off as team ideas, are discussed with members, and then go public. A shame indeed if every time some document is opened to wider discussion all the old links to it fail! We are switching to a simple date code now.
- File name extension. This is a very common one. “cgi”, even “.html” is something which will change. You may not be using HTML for that page in 20 years time, but you might want today’s links to it to still be valid. The canonical way of making links to the W3C site doesn’t use the extension.(how?)
- Software mechanisms. Look for “cgi”, “exec” and other give-away “look what software we are using” bits in URIs. Anyone want to commit to using perl cgi scripts all their lives? Nope? Cut out the .pl. Read the server manual on how to do it.
- Disk name – gimme a break! But I’ve seen it.
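As for leaving the file extension out of links: on an Apache host this is usually just a matter of configuration. A hypothetical .htaccess sketch ( assuming mod_negotiation or mod_rewrite is available on the server — this is an assumption about the hosting setup, not necessarily how W3C does it ):

```apache
# Turn on MultiViews so a request for /about is negotiated among
# about.html, about.rdf, about.en.html, ... based on the Accept headers.
Options +MultiViews

# Alternative with mod_rewrite: serve /page from page.html
# without ever exposing the extension in the URI.
# RewriteEngine On
# RewriteCond %{REQUEST_FILENAME}.html -f
# RewriteRule ^([^.]+)$ $1.html [L]
```

Either way, today’s links stay valid even if the pages stop being HTML in 20 years.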
Re-learning the Web from its principles…
Towards a new way of being IN the Web .)
[ this post was published on Wednesday, but it disappeared in the evening for unknown reasons… I’m reposting it with the same ideas… ]
Following some old thoughts…
-> My personal ideal home network
The LinkSys NAS200 arrived a few days ago .)
In the next few days, I’ll test it…
With some photos, of course.
Now I’m formatting the two 500GB hard disks ( two Samsung HD501LJ drives; take a look at Tom’s Hardware list, where they rank nicely )…
I have just configured it as RAID1 without using the installation procedure: as Ryan says, you can use the IP address and the HTTP admin interface instead of the EXE setup program [ I’m on Ubuntu and Mac OS X ], so easy .)
Like the NAS200’s packaging, the CDROM was neatly branded. I didn’t have high hopes for running the Setup Wizard, but I did give it the college try. From the command line, I navigated to the CD and typed “wine Setup.exe” and cringed as several error messages appeared in my terminal. I didn’t bother going any further with this. Knowing the NAS200 would be assigned an IP address via DHCP from my router, I launched Firefox and navigated to http://192.168.1.102. I felt a small measure of relief watching the NAS200’s administrative page load within my browser. This feeling was soon found fleeting as I attempted to log in. The default username and password listed in the manual did not work. A few curses later, I remembered the default login (admin / admin) used by my Linksys router and gave that a try. It worked.
Obviously, my IP address is not that one .)
But one strange thing happened when I powered it on…
Dear Ryan! Thanks for your test! I am looking for a reliable and yet handy NAS. My impression is that this field is relatively new and therefore there are many products that are not fully developed… let’s put it that way. However, the first impression of the LinkSys NAS200 was very good. (Open source firmware, feature set, etc.) There is one thing that kept me busy all night: after set-up as RAID1 (all new, no existing data on any drive, etc.) and formatting, the NAS200 starts a rebuild. See http://members.kabsi.at/losti/screen.JPG for a screenshot. The disc LEDs on the front panel flash alternately. This makes no sense and can’t be abandoned etc. There IS no data to rebuild, so why?? The drives (ST3500630AS) get EXTREMELY hot and it would take a long time for this rebuild to complete (~5 mins per %). No RAID system (e.g. Promise TX2300) used before ever showed such behaviour. Did you experience this rebuild after set-up too, or do I have a defective unit? Your feedback (or anyone else’s) would be highly appreciated! Andi
I had the same problem described in this comment; by the way, it works fine now…
My disks weren’t that hot, but I don’t understand why it rebuilds a RAID1 array when there is nothing to rebuild… Mah…
Anyway, now it works fine…
Main purpose: backing up my data across my LAN without any crap or configuration…
With the power of the Net .)
And the scalability of Moore’s law in the disk area… ( changing disks without changing the unit )
Testing it also with an idea of RDF storage in mind, maybe .)
Other opinions and impressions of it in the next week
One image to tell the vision, no time to explain why…
Not completed yet…
But the principle behind this one is the Net itself:
end-to-end logic, where complexity is at the edges, not in the medium
Made with thinkature…