What is Web 3.0?

Back in May, an intrepid interlocutor in Korea stuck a pointy stick into a semantic hornet’s nest by asking Google’s resident CEO, Eric Schmidt, an “easy question”: What is Web 3.0? After some grumbling about “marketing terms,” Schmidt obliged, saying that, to him, Web 3.0 is all about the simplification and democratization of software development, as people would begin to draw on the tools and data floating around in the Internet “cloud” to cobble together custom applications, which they would then share “virally” with friends and colleagues. Said Schmidt:

My prediction would be that Web 3.0 would ultimately be seen as applications that are pieced together [and that share] a number of characteristics: the applications are relatively small; the data is in the cloud; the applications can run on any device – PC or mobile phone; the applications are very fast and they’re very customizable; and furthermore the applications are distributed essentially virally, literally by social networks, by email. You won’t go to the store and purchase them. … That’s a very different application model than we’ve ever seen in computing … and likely to be very, very large. There’s low barriers to entry. The new generation of tools being announced today by Google and other companies make it relatively easy to do. [It] solves a lot of problems, and it works everywhere.

This is – big surprise – a vision of network computing that dovetails neatly with Google’s commercial and technological interests. Google is opposed to all proprietary applications and data stores (unless it controls them) because walled sites and applications conflict with its three overarching and interconnected goals: (1) to get people to live as much of their lives online as possible, (2) to be able to track all online activity as closely as possible, and (3) to deliver advertising connected to as much online activity as possible. (“Online” encompasses anything mediated by the Net, not just things that appear on your PC screen.) To put it a different way, all software and all data are simply complements to Google’s core business – serving advertisements – and hence Google’s interest lies in destroying all barriers, whether economic, technological, or legal, to all software and all data. Almost everything the company does, from building data centers to buying optical fiber to supporting free wi-fi to fighting copyright to supporting open source to giving software and information away free, is about removing those barriers.

In the mind of the Googleplex, the generations of the web proceed something like this:

Web 1.0: web as extension of PC hard drive

Web 2.0: web as application platform complementing PC operating system and hard drive

Web 3.0: web as universal computing grid replacing PC operating system and hard drive

Web 4.0: web as artificial intelligence complementing human race

Web 5.0: web as artificial intelligence supplanting human race

That’s fine and dandy, but there’s a little problem. Schmidt’s definition of Web 3.0 seems to conflict with the prevailing definition, which presents “Web 3.0” as a synonym for what used to be called (and sometimes still is) “the Semantic Web.” In this definition, Web 3.0 is all about creating a richer, more meaningful language for computers to use in communicating with other computers over the Net. It’s about getting machines to do a lot of the interpretive functions that currently have to be done by people, which would ultimately take automation to a whole new level.
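To make “machines talking to machines” a little more concrete, here is a minimal sketch of the kind of machine-readable statement the Semantic Web traffics in, written as RDF triples using Python’s rdflib library. The library choice, the example.org identifiers, and the use of the FOAF vocabulary are mine, purely for illustration:

```python
# A minimal Semantic Web sketch: describe a person as RDF triples
# (subject, predicate, object) that other machines can parse and query.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF, RDF

g = Graph()

# Hypothetical identifiers; any stable URIs would do.
alice = URIRef("http://example.org/alice")
bob = URIRef("http://example.org/bob")

g.add((alice, RDF.type, FOAF.Person))        # "Alice is a person"
g.add((alice, FOAF.name, Literal("Alice")))  # "Alice's name is 'Alice'"
g.add((alice, FOAF.knows, bob))              # "Alice knows Bob"

# Serialize to Turtle, a common machine-readable Semantic Web format.
print(g.serialize(format="turtle"))
```

The point is not the syntax but the shift: the statements carry enough explicit meaning that a program, rather than a person, can draw the inference that Alice and Bob are connected.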

Here are the generations of the web from the Semanticist perspective:

Web 1.0: web as people talking to machines

Web 2.0: web as people talking to people (through machines)

Web 3.0: web as machines talking to machines

Web 4.0: web as artificial intelligence complementing human race

Web 5.0: web as artificial intelligence supplanting human race

Now, it’s true that both visions end in the same sunny place, with the universal slavesourcing – sorry, I mean crowdsourcing – of human intelligence and labor by machines, but, still, the confusion about the nature of Web 3.0 is problematic. Here we are, halfway through 2007, and we still don’t have a decent, commonly held definition of Web 2.0, and already we have competing definitions of the Web’s next generation.

Or do we? I think that the apparent conflict between the two definitions may in fact be superficial, arising from the different viewpoints taken by Schmidt (an applications viewpoint) and the Semanticists (a communications viewpoint). As a public service, therefore, I will put on my Tim O’Reilly mask and offer a definition of Web 3.0 capacious enough to encompass both the traditional Semantic Web definition and Eric Schmidt’s mashups-on-steroids definition: Web 3.0 involves the disintegration of digital data and software into modular components that, through the use of simple tools, can be reintegrated into new applications or functions on the fly by either machines or people.
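In the spirit of that definition (and of the Yahoo Pipes jab below), here is a minimal sketch of such a reintegration: pull two feeds apart into modular items, merge them, and filter the result into a new application on the fly. The feed URLs are hypothetical, and the feedparser dependency is my own illustrative choice:

```python
# A toy "pipe": disassemble two RSS feeds into modular items,
# then reassemble only the items that match a keyword.
import feedparser  # third-party RSS/Atom parser, assumed installed

FEEDS = [
    "http://example.com/news.rss",   # hypothetical feed URLs
    "http://example.org/blogs.rss",
]

def mashup(feed_urls, keyword):
    items = []
    for url in feed_urls:
        for entry in feedparser.parse(url).entries:
            items.append({"title": entry.title, "link": entry.link})
    # Reintegrate just the components we want, on the fly.
    return [i for i in items if keyword.lower() in i["title"].lower()]

for item in mashup(FEEDS, "web 3.0"):
    print(item["title"], "->", item["link"])
```

Whether the reassembly is done by a person dragging boxes in a GUI or by a machine following semantic markup, the underlying move is the same: data and software broken into components, then recombined.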

Stick that in your Yahoo Pipe and smoke it.

9 thoughts on “What is Web 3.0?”

  1. Tom Lord

    Web 3.0 involves the disintegration of digital data and software into modular components that, through the use of simple tools, can be reintegrated into new applications or functions on the fly by either machines or people.

See, this is why you are such a stud of a journalist. That’s right, and wonderful, and a concise summary of what others have been grunting about and pointing towards. (cf., e.g., “mash-ups”)

    Said “disintegration […] into modular components” entails defining the “cloud” as a specific “platform”. That is to say: modularity and composability are side effects of platform standardization. The platform standardization is a market maker for commodity components — for modules that can be combined freely.

You might want to look into stuff associated with David Patterson (yes, that David Patterson) at the University of California, Berkeley — for one slice across the platform definition problem. The “next Google,” so to speak, should in his view be something that anyone can cook up in their basement over a few late-night sessions — not merely the “next idea of a Google-ish thing” but an actual, working, at-scale implementation.

    Fun stuff.

    -t

  2. James Urquhart

    As you noted in a previous post, computing of any kind starts with infrastructure. I would modify your definition to read:

    Web 3.0 involves the disintegration of digital data, software and infrastructure into modular components that, through the use of simple tools, can be reintegrated into new applications or functions on the fly by either machines or people.

This takes into account the effect that utility computing will have on the rest of “Web 3.0”, including creating mobility between compute-capacity vendors. Many may think of “Web anything” as a software problem, but static infrastructure is just as big a problem as static software architectures in a “mashup” world.

    I commented on this in a little more depth on my blog.

  3. Jean-Marie Le Ray

    Hi Nick,

    Excellent piece of analysis :-)

Would you authorize me to translate it into French and publish it on my blog, which is a kind of translation lab for sharing with French-speaking people?

    Many thanks,

    Jean-Marie

  4. cognominal

Schmidt was a VP at Sun and is carrying forward at Google the vision spelled out by the Sun slogan “The Network is the computer”.

  5. SallyF

Oh Nick, “slavesourcing” is so…evil-sounding. Why not sucker-sourcing (see slide 11 of Prodromou’s presentation) or internet-addict-sourcing? With the selling of Web 2.0 communities like Newsvine and possibly Facebook to concerns that will obviously mine those communities for their free labor, the few on top of the pyramid scheme are the only winners. To paraphrase the gun lobby: Web 2.0 does not cheat people; people cheat people.

On a more serious note, “The Network is the computer” is a slogan for which I’ve always had a soft spot. I look forward to a semantic web. I value workgroups that collaborate and share and really do demonstrate that, with good teams and decent people, the whole is greater than the sum of its parts. But that kind of synergy is not due all that much to technology; it is due to people and their openness, maturity, and benevolence. Those qualities do happen with some teams, but not often enough.

In terms of the weed you are smoking, it is probably little different from the marketing hype of “Software Through Pictures” that companies like Aonix promoted in the 1990s. Sort of like the “Artificial Intelligence” fiascos of the 1980s that changed their names to “rule engine” and “expert system”. Things are better with lots of open-source components, but components still often have to be cobbled together by somebody who at least understands the interfaces between the pieces. Integration, debugging, and testing are rarely automagic. Web 3.0 will not fix that, but I do hope to see more mashups-on-steroids in the area of urban mapping.
