Future Earth II: Sensor webs and the Information Commons

Such a big topic, so little time…

Science journalist Declan Butler has a post up announcing a special issue of Nature, with free access to articles on the future of computing. It's a long and absorbing read, and a quick blog post like this one won't do it justice. Declan's own piece, "2020 computing: Everything, everywhere", covers the "sensor web":

These new computers would take the form of networks of sensors with data-processing and transmission facilities built in. Millions or billions of tiny computers — called ‘motes’, ‘nodes’ or ‘pods’ — would be embedded into the fabric of the real world. They would act in concert, sharing the data that each of them gathers so as to process them into meaningful digital representations of the world.
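
To make the mote idea a bit more concrete, here's a toy sketch of mine (not from the Nature piece; every name and number is invented) in which nodes gossip their readings until each one can compute a shared picture of the whole network:

    import random

    # A toy mote: it holds one local reading and gossips everything it has
    # learned with random neighbours, so a shared picture of the network
    # emerges without any central collection point.
    class Mote:
        def __init__(self, node_id, reading):
            self.node_id = node_id
            self.readings = {node_id: reading}  # everything this mote knows
            self.neighbours = []

        def gossip(self):
            # Swap knowledge with one randomly chosen neighbour.
            peer = random.choice(self.neighbours)
            peer.readings.update(self.readings)
            self.readings.update(peer.readings)

        def world_view(self):
            # The "meaningful representation": here, just the average reading.
            return sum(self.readings.values()) / len(self.readings)

    # Ten motes with made-up temperature readings, all within radio range.
    motes = [Mote(i, 15 + random.random() * 10) for i in range(10)]
    for m in motes:
        m.neighbours = [n for n in motes if n is not m]

    # A few rounds of gossip and every mote can report the network average.
    for _ in range(5):
        for m in motes:
            m.gossip()
    print(motes[0].world_view())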

How best to represent this data meaningfully? Why, on a virtual Earth, of course. Here's an actual working example (via Declan's post): a project in the James Reserve, California. The KMZ file available on the site gives you access to live data from a hundred-odd sensors scattered around a valley. This is a groundbreaking use of Google Earth (the first I've seen, in any case), but it is also a taste of things to come. There's not much I can add beyond pointing to it, so go play, and imagine this applied to urban environments, industrial spaces, search and rescue, livestock management, border control…

[Image: live James Reserve sensor data displayed in Google Earth]
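
For a rough sense of how such a visualization gets built, here's a minimal sketch (mine, not the James Reserve project's code) that writes sensor readings out as KML placemarks for Google Earth to display; the names, coordinates and readings are all invented:

    # Each reading becomes a KML Placemark at its sensor's coordinates.
    sensors = [
        {"name": "mote-01", "lat": 33.808, "lon": -116.778, "temp_c": 21.4},
        {"name": "mote-02", "lat": 33.809, "lon": -116.776, "temp_c": 19.8},
    ]

    placemarks = "".join(
        "  <Placemark>\n"
        f"    <name>{s['name']}</name>\n"
        f"    <description>Temperature: {s['temp_c']} C</description>\n"
        f"    <Point><coordinates>{s['lon']},{s['lat']},0</coordinates></Point>\n"
        "  </Placemark>\n"
        for s in sensors
    )

    kml = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://earth.google.com/kml/2.0">\n'
        "<Document>\n" + placemarks + "</Document>\n</kml>\n"
    )

    # Google Earth opens the .kml directly; zipping it up gives you a KMZ.
    with open("sensors.kml", "w") as f:
        f.write(kml)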

How might all this information be distributed? Check out this page outlining the idea of the "Information Commons" (via Declan, again). The Information Commons posits a peer-to-peer approach for these myriad datapoints: interconnected stores of data that do not attempt to impose hierarchies on one another. The idea faces challenges familiar from today's P2P networks (scaling costs, data irregularities, sourcing questions), but those are exactly the problems being solved on P2P networks right now.
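
To make that concrete, here's a toy sketch of how I read the idea: equal peers, each holding a slice of the data, answering a query by flooding it to their neighbours, with no central index anywhere. All names and data are invented:

    # A peer in the commons: a local data store plus a list of equals.
    class Peer:
        def __init__(self, name, store):
            self.name = name
            self.store = store   # this peer's slice of the commons
            self.peers = []      # no parents or children, just equals

        def query(self, key, seen=None):
            # Answer from the local store, then flood to unvisited peers.
            seen = seen if seen is not None else set()
            if self.name in seen:
                return []
            seen.add(self.name)
            hits = [(self.name, v) for k, v in self.store.items() if k == key]
            for p in self.peers:
                hits.extend(p.query(key, seen))
            return hits

    a = Peer("a", {"temp/james-reserve": 21.4})
    b = Peer("b", {"humidity/james-reserve": 0.31})
    c = Peer("c", {"temp/james-reserve": 21.6})
    a.peers, b.peers, c.peers = [b], [a, c], [b]

    # Results are gathered from across the network, no hierarchy required.
    print(a.query("temp/james-reserve"))  # [('a', 21.4), ('c', 21.6)]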

Still, this would just mean that an inordinate amount of georeferenced data becomes available to the user. How to forge some kind of order out of this complexity? Well, that sounds like just the job for Google, Microsoft, Yahoo!…