For my first post, I'll start in on a topic that I find of great interest, namely "Science 2.0."
The term Web 2.0 is used widely to denote the technologies, applications, and business models that underlie success stories such as Google, Amazon, eBay, and Flickr. Powerful services (search, maps, product information, ...) accessible via simple network protocols allow clients to construct new services via composition (aka mashups), such as Declan Butler's avian flu map. Clients gain access to a powerful new programming platform (the ensemble of available services), a trend that is arguably revolutionary in terms of its impact on just about every aspect of the computer industry. Particularly impressive is how this development is enabled by massive infrastructure spending: $1.5B in 2006 by Google alone. Presumably all paid for by advertising.
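To make the composition pattern concrete, here is a minimal sketch in Python of how a mashup in the spirit of the avian flu map might be assembled: it queries one service for outbreak reports and a second for coordinates, producing points that could be handed to a mapping service. The URLs and JSON field names are invented for illustration; only the pattern of gluing together simple HTTP/JSON services is the point.

    import json
    import urllib.request
    from urllib.parse import quote

    # Hypothetical endpoints -- stand-ins for the kind of simple
    # HTTP/JSON services that mashups compose. Neither the URLs nor
    # the JSON field names below come from any real service.
    CASES_URL = "https://example.org/flu/cases?format=json"
    GEOCODE_URL = "https://example.org/geocode?q={place}&format=json"

    def fetch_json(url):
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    # Compose the two services: annotate each outbreak report with
    # coordinates so the result can be plotted on a map.
    points = []
    for report in fetch_json(CASES_URL)["reports"]:
        loc = fetch_json(GEOCODE_URL.format(place=quote(report["place"])))
        points.append({"place": report["place"], "cases": report["cases"],
                       "lat": loc["lat"], "lon": loc["lon"]})
    print(points)

Each service stays simple; the new value lives entirely in the few lines of glue.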
By a very loose analogy, we may use the term "Science 2.0" to refer to new approaches to research enabled by a quasi-ubiquitous Internet and Internet-based protocols for discovering and accessing services. Pioneering communities such as astronomy have already demonstrated the potential of such approaches, via virtual observatories that provide online access to digital sky surveys and that have enabled both new discoveries and new approaches to education (it seems fun to be a kid today). The early lead of the astronomy community in this space may owe something to the fact that astronomical data is reasonably simple in structure and, as Jim Gray has observed, isn't worth anything! But other fields such as genomics and environmental sciences are not far behind.
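For a flavor of what programmatic access to a virtual observatory looks like, here is a hedged sketch: the service URL is hypothetical, but the RA/DEC/SR query parameters follow the IVOA Simple Cone Search convention that many sky-survey services implement.

    import urllib.parse
    import urllib.request

    # Hypothetical service URL; the RA/DEC/SR parameters follow the
    # IVOA Simple Cone Search convention (positions and search radius
    # in decimal degrees).
    SERVICE = "https://example.org/scs"

    def cone_search(ra_deg, dec_deg, radius_deg):
        """Fetch catalog sources within radius_deg of (ra_deg, dec_deg)."""
        query = urllib.parse.urlencode(
            {"RA": ra_deg, "DEC": dec_deg, "SR": radius_deg})
        with urllib.request.urlopen(SERVICE + "?" + query) as resp:
            return resp.read()  # real services return a VOTable (XML) document

    # A program, not a person, sweeps a strip of sky: dozens of queries
    # that nobody would issue by hand through a web form.
    for ra in range(0, 360, 5):
        votable = cone_search(ra, 0.0, 0.25)
        print(ra, len(votable), "bytes")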
What is exciting and empowering is not simply that data are online: after all, the Web has provided us with access to data for a while. What is new is that we now have enough uniformity in access protocols, and sufficient server-side computing power, to support access not by people but by programs. Thus, we see an explosion in data access, as scientists write programs that process large quantities of data automatically. Increasingly, scientists are also publishing useful programs as services: service catalogs that list available services both document and encourage the resulting rapid expansion in the scope and power of computational tools.
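And publishing a useful program as a service can be almost as simple as consuming one. The sketch below, using only Python's standard library, wraps a toy sequence-statistics function (a made-up placeholder, not any particular tool) behind an HTTP endpoint so that other scientists' programs can call it:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs, urlparse

    def gc_content(seq):
        """Fraction of G/C bases in a DNA sequence (toy placeholder)."""
        seq = seq.upper()
        return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # e.g. GET /?seq=ATGC  ->  {"gc": 0.5}
            params = parse_qs(urlparse(self.path).query)
            seq = params.get("seq", [""])[0]
            body = json.dumps({"gc": gc_content(seq)}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8000), Handler).serve_forever()

A real catalog entry would of course add documentation, validation, and provenance metadata, which is exactly where some of the questions below come in.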
Science 2.0 raises many challenging methodological, sociological, and technical issues. How much trust can we place in remote services, and how do we validate (and document) a result based on such services? How do we motivate people to build such services, and how do we ensure that they are maintained? How do we build out the increasingly substantial IT infrastructure that will be needed to support thousands of users? (Unfortunately we can't rely on advertising ...)
I've explored some of these questions in recent talks and papers, e.g., my keynote at the 2006 Geoinformatics Conference and a 2005 article in Science magazine, "Service-Oriented Science." I'll also be exploring other aspects of Science 2.0 in future posts.
In the field of medicine and biomedical research we are equally fascinated by the prospect of Science 2.0 and we are going to host a conference this year (Sept 4-5th in Toronto) called Medicine 2.0 (http://www.medicine20congress.com), which will explore some of these aspects.
See http://gunther-eysenbach.blogspot.com/2008/03/medicine-20-congress-website-launched.html for considerations about the scope of Medicine 2.0, which will include Science 2.0.
Posted by: Gunther Eysenbach | March 09, 2008 at 05:15 AM