Back in May, an intrepid interlocutor in Korea stuck a pointy stick into a semantic hornet’s nest by asking Google’s resident CEO, Eric Schmidt, an “easy question”: What is Web 3.0? After some grumbling about “marketing terms,” Schmidt obliged, saying that, to him, Web 3.0 is all about the simplification and democratization of software development, as people would begin to draw on the tools and data floating around in the Internet “cloud” to cobble together custom applications, which they would then share “virally” with friends and colleagues. Said Schmidt:
My prediction would be that Web 3.0 would ultimately be seen as applications that are pieced together [and that share] a number of characteristics: the applications are relatively small; the data is in the cloud; the applications can run on any device – PC or mobile phone; the applications are very fast and they’re very customizable; and furthermore the applications are distributed essentially virally, literally by social networks, by email. You won’t go to the store and purchase them. … That’s a very different application model than we’ve ever seen in computing … and likely to be very, very large. There’s low barriers to entry. The new generation of tools being announced today by Google and other companies make it relatively easy to do. [It] solves a lot of problems, and it works everywhere.
This is – big surprise – a vision of network computing that dovetails neatly with Google’s commercial and technological interests. Google is opposed to all proprietary applications and data stores (unless it controls them) because walled sites and applications conflict with its three overarching and interconnected goals: (1) to get people to live as much of their lives online as possible, (2) to be able to track all online activity as closely as possible, and (3) to deliver advertising connected to as much online activity as possible. (“Online” encompasses anything mediated by the Net, not just things that appear on your PC screen.) To put it a different way, all software and all data are simply complements to Google’s core business – serving advertisements – and hence Google’s interest lies in destroying all barriers, whether economic, technological, or legal, to all software and all data. Almost everything the company does, from building data centers to buying optical fiber to supporting free wi-fi to fighting copyright to supporting open source to giving software and information away free, is about removing those barriers.
In the mind of the Googleplex, the generations of the web proceed something like this:
Web 1.0: web as extension of PC hard drive
Web 2.0: web as application platform complementing PC operating system and hard drive
Web 3.0: web as universal computing grid replacing PC operating system and hard drive
Web 4.0: web as artificial intelligence complementing human race
Web 5.0: web as artificial intelligence supplanting human race
That’s fine and dandy, but there’s a little problem. Schmidt’s definition of Web 3.0 seems to conflict with the prevailing definition, which presents “Web 3.0” as a synonym for what used to be called (and sometimes still is) “the Semantic Web.” In this definition, Web 3.0 is all about creating a richer, more meaningful language for computers to use in communicating with other computers over the Net. It’s about getting machines to do a lot of the interpretive functions that currently have to be done by people, which would ultimately take automation to a whole new level.
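The Semanticist idea is easier to see in miniature. Real Semantic Web data lives in formats like RDF and is queried with languages like SPARQL; the sketch below is just a toy Python stand-in (the triple store and `query` function are invented for illustration, though the facts themselves come from this post) showing how knowledge encoded as subject-predicate-object triples lets a machine answer questions without any human interpretation in the loop.

```python
# A toy triple store: facts as (subject, predicate, object) tuples that a
# machine can match and chain without human help. The facts below are drawn
# from this post; the API is a made-up stand-in for real RDF tooling.

triples = [
    ("Web3.0", "synonym_for", "SemanticWeb"),
    ("EricSchmidt", "ceo_of", "Google"),
    ("SemanticWeb", "concerns", "machine_to_machine_communication"),
]

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# A machine can now answer "what is Web 3.0 a synonym for?" directly:
print(query("Web3.0", "synonym_for"))
```

The point of the structured form is that a second machine can consume the answer and keep chaining facts, which is exactly the interpretive work that, on today's web, a person has to do.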
Here are the generations of the web from the Semanticist perspective:
Web 1.0: web as people talking to machines
Web 2.0: web as people talking to people (through machines)
Web 3.0: web as machines talking to machines
Web 4.0: web as artificial intelligence complementing human race
Web 5.0: web as artificial intelligence supplanting human race
Now, it’s true that both visions end in the same sunny place, with the universal slavesourcing – sorry, I mean crowdsourcing – of human intelligence and labor by machines, but, still, the confusion about the nature of Web 3.0 is problematic. Here we are, halfway through 2007, and we still don’t have a decent, commonly held definition of Web 2.0, and already we have competing definitions of the Web’s next generation.
Or do we? I think that the apparent conflict between the two definitions may in fact be superficial, arising from the different viewpoints taken by Schmidt (an applications viewpoint) and the Semanticists (a communications viewpoint). As a public service, therefore, I will put on my Tim O’Reilly mask and offer a definition of Web 3.0 capacious enough to encompass both the traditional Semantic Web definition and Eric Schmidt’s mashups-on-steroids definition: Web 3.0 involves the disintegration of digital data and software into modular components that, through the use of simple tools, can be reintegrated into new applications or functions on the fly by either machines or people.
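To make that capacious definition concrete, here is a minimal sketch, with all component and function names invented for illustration: each piece (a data fetch, a filter, a renderer) is a small self-contained module, and a "mashup" is nothing more than a pipeline wired together from whichever pieces are at hand, by a person or by a machine.

```python
# A sketch of "modular components reintegrated on the fly." Every name
# here is hypothetical; the shape of the composition is what matters.

def fetch(source):
    # Stand-in for pulling data out of the cloud (a feed, an API, a store).
    return {"headlines": ["Web 3.0 defined", "Schmidt speaks"], "source": source}

def filter_items(data, keyword):
    # A reusable filtering component.
    data["headlines"] = [h for h in data["headlines"] if keyword in h]
    return data

def render(data):
    # A reusable presentation component.
    return "\n".join(f"[{data['source']}] {h}" for h in data["headlines"])

def pipeline(*steps):
    # Reintegration: compose any set of components into a new application.
    def run(seed):
        result = seed
        for step in steps:
            result = step(result)
        return result
    return run

app = pipeline(fetch, lambda d: filter_items(d, "Web"), render)
print(app("news-feed"))  # prints "[news-feed] Web 3.0 defined"
```

Whether the `pipeline` call is typed by a hobbyist with a simple tool or assembled automatically by a machine reading semantic metadata is exactly the difference between Schmidt’s viewpoint and the Semanticists’ – the underlying model is the same.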
Stick that in your Yahoo Pipe and smoke it.