At first look, this is a sort of LazyWeb based on the long tail and money:
-> Amazon Mechanical Turk (Beta) - “Artificial Artificial Intelligence”

Amazon Mechanical Turk provides a web services API for computers to integrate Artificial Artificial Intelligence directly into their processing by making requests of humans.

Developers use the Amazon Mechanical Turk web services API to submit tasks to the Amazon Mechanical Turk web site, approve completed tasks, and incorporate the answers into their software applications.
**To the application, the transaction looks very much like any remote procedure call** - the application sends the request, and the service returns the results. In reality, a network of humans fuels this Artificial Artificial Intelligence by coming to the web site, searching for and completing tasks, and receiving payment for their work.
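That RPC-like shape can be sketched in a few lines of Python. This is a toy model only: none of these class or method names (`TaskQueue`, `submit_task`, `complete_task`) come from the real Amazon API; they just illustrate the submit / human-answers / retrieve cycle described above.

```python
# Toy model of the Mechanical Turk request/response cycle.
# All names here are invented for illustration; this is NOT
# the real Amazon web services API.

class TaskQueue:
    """Stands in for the Mechanical Turk web site."""
    def __init__(self):
        self._tasks = {}      # task_id -> (question, reward)
        self._answers = {}    # task_id -> answer
        self._next_id = 0

    def submit_task(self, question, reward_cents):
        """The application posts a task and gets back an id."""
        task_id = self._next_id
        self._next_id += 1
        self._tasks[task_id] = (question, reward_cents)
        return task_id

    def complete_task(self, task_id, answer):
        """A human worker finds the task on the site and answers it."""
        self._answers[task_id] = answer

    def get_answer(self, task_id):
        """The application polls for the result, as in a remote call."""
        return self._answers.get(task_id)  # None until a human answers


queue = TaskQueue()

# Application side: looks like an ordinary remote procedure call.
tid = queue.submit_task("Is there a person in photo_123.jpg?", reward_cents=2)

# Human side: in reality a worker browses the site and answers.
queue.complete_task(tid, "yes")

# Application side: incorporate the answer into the software.
print(queue.get_answer(tid))  # -> yes
```

The interesting part is exactly what the quote says: from the caller's point of view nothing distinguishes this from a machine answering.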

I am fascinated by this, and by its relationship with open source and free software methodologies…

I’ll look into it more deeply in the next few days…

The service is
-> Amazon Mechanical Turk

work in progress


Some thoughts about the latest developments in the GNOME desktop environment
Based on this post (_written in Italian_)…

Using Ubuntu Dapper with AIGLX support, in combination with Beagle, Google web services, Evolution and Liferea for RSS support… it’s a very cool and usable place to work… (this is my notebook, a white BenQ S53)

Thinking of the innovation and productivity of a tool like DevonThink, its capability to show and “record” data, and how similar it is to the RDF world… at least potentially, AFAIK…
I think I’m looking for… a bit more…

Let me explain: there are a lot of good ideas on this cool blog about new ways of visualizing data: graphs everywhere, but not only…
And with the support of other ways of navigating the sea of information…

I’m frustrated by the not-enough-time problem: a request for the LazyWeb

Liferea reads and saves RSS, RDF, and Atom data, but in a way that is not simple to export…

**Why not have a simple RDF engine that captures all that data, with a simple setup and a simple way to offer data to other RDF-capable applications?**

Looking into the SIMILE project these days, I am surprised by the good work they are doing:

Thinking of Beagle++: a very cool idea for semantic data…

Like the original Beagle, our improved Beagle++ crawls your desktop (only the directories you want it to) and indexes the content of all known files such as Office documents, Emails, images, music and video files, source code, etc. Unlike Beagle, our version also extracts metadata and restores the semantic relations among documents and entities on your desktop. E.g. when a keyword search returns a file that you originally stored from an attachment of an Email, the search application will tell you so.
Then you can obtain further information about the context in which you received that file.
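The kind of relation Beagle++ restores can be pictured as provenance metadata recorded at save time. A toy sketch, with invented relation and field names, just to show the shape of the idea:

```python
# Toy sketch of desktop provenance metadata, in the spirit of
# Beagle++: when a file is saved from an email attachment, record
# the relation so a later search can explain where it came from.
# The relation name "storedFromAttachmentOf" is invented here.

provenance = {}  # file path -> (relation, source entity)

def save_attachment(path, email_id, sender):
    """Simulate saving an attachment while keeping the relation."""
    provenance[path] = ("storedFromAttachmentOf",
                        {"email": email_id, "sender": sender})

def explain(path):
    """What a semantic search result could tell you about a hit."""
    if path not in provenance:
        return f"{path}: no recorded context"
    relation, source = provenance[path]
    return (f"{path}: {relation} {source['email']} "
            f"(sent by {source['sender']})")

save_attachment("~/docs/report.pdf", "msg-4711", "alice@example.org")
print(explain("~/docs/report.pdf"))
```

A plain keyword index throws this context away; keeping it is exactly the “restores the semantic relations” step the quote describes.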

Relations and links between entities in our everyday life: understanding the implicit semantic data in the way WE make associations between files and data, and saving it…

The directions are very interesting…

As someone said on PlanetRDF:

Semantic Web based research isn’t working

I totally agree: the semantic landscape is changing, and it’s only a matter of time before everyday desktop usage becomes totally DATA-based… finally…
Applications and other desktop metaphors are old enough…
**Linking (as data hyperlinking), implicit and explicit, is a new key for managing and navigating our data…**

Google docet….

work in progress


Some time ago I posted something about microcontent and wikis

Now there are two innovations: a sort of blog wiki,
and an intelligent use of wiki + TiddlyWiki as a comfortable way of publishing content as an aggregation of microcontent.
From MicroWiki:

The MicroWiki is meant as a collaborative project. If you want to contribute or discuss the wiki’s content, please leave a comment on the MicroBlog or mail to conferenceATmicrolearning.org. Contributions will be included in the next updated version of the MicroWiki (at least every two weeks).

A Wiki is a special piece of software that makes creating and updating hypertext extremely easy and intuitive. If you have visited Wikipedia, as you should, you know that ‘classical’ wikis can be updated online on-the-fly, in principle by anyone.

The TiddlyWiki technology this MicroWiki is using is different, because it is not a server-side wiki.
There is no database behind it: just JavaScript and CSS packed into one single HTML file. You can change and expand your own MicroWiki (or begin a TiddlyWiki of your own on any subject), but to do this you have to download the file to your computer first (see SaveChanges).

Thanks to an extraordinary Danny Ayers bookmark, I’ve found two important things:

  • an important example of bringing together Semantic technologies and Social software to produce an interesting piece of innovation in usability from the end user’s point of view, and not only that: System One’s activities…

    For a start there’s seamless integration of enterprise info and authoring with real-time analysis of what you write. Although there are some familiar technologies involved as well (Wiki/blogging, syndication etc), the tech is presented in a way that from a user’s point of view, it gets out of the way and just works.
    There are capabilities like custom (semantic) form building available, but even those look designed to be maximally user-friendly.
    Probably the most notable thing about the system is that though there are broad facets (context) and views (perspectives), most of the navigation is relevance-based and changes in real time as you interact. Compared to some of the other knowledge management tools out there, I reckon this does deserve the epithet “groundbreaking”.

  • a link to Microlearning.org, where I’ve found some good points of interest…

Matteo Brunati

An Open Data activist first, and later a student of Civic Hacking and of the importance of the role of communities, I come from information science, where I discovered Free Software and Open Source, the Semantic Web, and the philosophy that guides the development of World Wide Web standards, and I have been fascinated by them ever since.
My work (from 2018 onward) has led me to deal with Legal Tech, Cyber Security and Compliance, fields that are strongly interconnected and decidedly challenging.


Compliance Specialist SpazioDati
#CivicHackingIT enthusiast


Trento