Welcome Web 3.0!

Web 2.0 is so over. First came the tepid reviews of the third annual 2.0 boondoggle. “If you were looking to learn something new,” sniffed GigaOm’s Liz Gannes, “this week’s Web 2.0 Summit was not the place to be.” Wrote a jaded Scott Karp, “there were few revelations, few moments where you had the exhilarating experience of seeing something that was about to change the world. Every conversation I had began with discussing the underwhelming nature of Web 2.0.” “I didn’t come away from the conference having learned much,” confessed Richard MacManus, who felt the highlight of the event “was seeing Lou Reed play live.” It was Lou himself, though, who put it most bluntly, telling the Web 2.0ers, “You got 20 minutes.”

But the nail in the coffin comes in tomorrow’s New York Times, which features a big article by John Markoff on – yes! – Web 3.0. Formerly known as the semantic web, but now rebranded for mass consumption, Web 3.0 promises yet another Internet revolution. It would, Markoff writes, “provide the foundation for systems that can reason in a human fashion … In its current state, the Web is often described as being in the Lego phase, with all of its different parts capable of connecting to one another. Those who envision the next phase, Web 3.0, see it as an era when machines will start to do seemingly intelligent things.”

Personally, I’m overjoyed that Web 3.0 is coming. When dogcrap 2.0 sites like PayPerPost and ReviewMe start getting a lot of attention, you know you’re seeing the butt end of a movement. (There’s a horrible metaphor trying to get out of that last sentence, but please ignore it.) Besides, the arrival of 3.0 kind of justifies the whole 2.0 ethos. After all, 2.0 was about escaping the old, slow upgrade cycle and moving into an age of quick, seamless rollouts of new feature sets. If we can speed up software generations, why not speed up entire web generations? It doesn’t matter if 3.0 is still in beta – that makes it all the better, in fact.

But, seriously, Markoff’s piece is a thought-provoking one. As he describes it, Web 3.0 will be about mining “meaning,” rather than just data, from the web by using software to discover associations among far-flung bits of information:

the Holy Grail for developers of the semantic Web is to build a system that can give a reasonable and complete response to a simple question like: “I’m looking for a warm place to vacation and I have a budget of $3,000. Oh, and I have an 11-year-old child.” Under today’s system, such a query can lead to hours of sifting — through lists of flights, hotel, car rentals — and the options are often at odds with one another. Under Web 3.0, the same search would ideally call up a complete vacation package that was planned as meticulously as if it had been assembled by a human travel agent.

Web 3.0 thus promises to be much more useful than 2.0 (not to mention 1.0) and to render today’s search engines more or less obsolete. But there’s also a creepy side to 3.0, which Markoff only hints at. While it will be easy for you to mine meaning about vacations and other stuff, it will also be easy for others to mine meaning about you. In fact, Web 3.0 promises to give marketers, among others, an uncanny ability to identify, understand and manipulate us – without our knowledge or awareness. If you’d like a preview, watch Dan Frankowski’s presentation You Are What You Say and Oren Etzioni’s presentation All I Really Need to Know I Learned from Google, and then connect the dots. (Thanks to Greg Linden for those links.)
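To see how different "mining meaning" is from keyword search, consider a toy sketch of the idea: if facts are stored as machine-readable subject–predicate–object statements, a program can join them to answer a structured question like Markoff's vacation query. Everything below – the destinations, the predicates, the prices – is invented for illustration; a real semantic-web system would use RDF triples and a query language like SPARQL, not Python lists:

```python
# Toy "semantic" fact base: (subject, predicate, object) triples.
# All data here is invented for illustration.
FACTS = [
    ("cancun",    "climate",       "warm"),
    ("cancun",    "package_price", 2400),
    ("cancun",    "kid_friendly",  True),
    ("reykjavik", "climate",       "cold"),
    ("reykjavik", "package_price", 1800),
    ("vegas",     "climate",       "warm"),
    ("vegas",     "package_price", 1500),
    ("vegas",     "kid_friendly",  False),
]

def value(subject, predicate):
    """Look up the object of a (subject, predicate, ?) triple."""
    for s, p, o in FACTS:
        if s == subject and p == predicate:
            return o
    return None

def plan_vacation(budget, with_child):
    """Join facts to answer: a warm place, under budget, OK for a kid."""
    places = {s for s, _, _ in FACTS}
    return sorted(
        s for s in places
        if value(s, "climate") == "warm"
        and value(s, "package_price") <= budget
        and (not with_child or value(s, "kid_friendly") is True)
    )

print(plan_vacation(budget=3000, with_child=True))   # → ['cancun']
```

The punchline of the semantic web is that these triples needn't live in one program: the flight facts, hotel facts and kid-friendliness facts could sit in different databases run by different parties, and the same kind of join would still work.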

Markoff quotes artificial-intelligence-promoter Danny Hillis, who calls Web 3.0 technologies “spooky.” If Danny Hillis thinks they’re spooky, they’re spooky. But I’m looking on the bright side: At least I’ll have more material for the old blog.

One last thing: I’m claiming the trademarks on Web 3.0 Conference, Web 3.0 Summit, Web 3.0 Camp, Web 3.0 Uncamp, and Web 3.0 Olde Tyme Hoedown.

18 thoughts on “Welcome Web 3.0!”

  1. Peter Evans-Greenwood

    The semantic web (and the whole ontology management thing) has been lurking in the shadows for a long time. I expect it will continue to lurk until someone finds a way to automagically mark up all the content that’s out there, and no one seems to have made much progress on this problem for quite some time. Otherwise, no one will bother to take the time to mark up all their content unless they have a strong economic interest in undertaking what is likely to be an expensive manual task. And if it’s not marked up, it doesn’t exist in the semantic web.

    If we want to look for a 3.0, then it’s much more interesting to consider a shift from simply supporting users to actually augmenting them. People are starting to do this today, and it doesn’t involve a magic markup fairy.

  2. Danny Ayers

    Hmm, well yes, I’m not sure about saying Web 2.0 is over, but the dog-butt analogies ring true, especially since we got a puppy a few weeks ago…

    I’m personally skeptical about it being feasible to get a decent amount of common sense into a useful, computer-comprehensible form. Cyc has been working on that for years, and they have encoded a huge amount of information, yet there still isn’t a killer app at the end of it.

    But on the other hand, I wouldn’t be so quick to dismiss the potential for leveraging data on the web, even if you aren’t explicitly “mining meaning”. A lot of useful, quasi-meaning can be found in the connections between pieces of information, and the (Semantic) Web is particularly good at linkage.

    On Peter’s point – “if it’s not marked [up] it doesn’t exist in the semantic web” – I’m afraid he’s slipped into a common fallacy. Sure, to be able to communicate the material needs to be expressed in a form that machines can work with, i.e. markup. But that doesn’t mean the material has to be manually marked up.

    Is every single page you see at Amazon written by hand? There’s a massive amount of information already in machine-friendly form, tucked away in databases around the world. A lot of this is already exposed as HTML. But HTML is designed for human legibility; you have to scrape to get it back into a machine-friendly form. If you’re publishing HTML in this fashion, it’s no harder to publish the data in a form that’s more convenient for computers. With RDF and the associated Semantic Web technologies, there are the languages and tools for joining all these databases together in a useful fashion. We already have the tech to answer the kind of “Holy Grail” question quoted above; it’s only a matter of time before there are significant deployments on the Web (pick a number).

  3. Don Park

    Well, I think we think too much. We point to a mirage far off and say that’s where we should go. We point to a dust storm and give it a name because we think it’s meaningful. Oh, there is another dust storm. Why don’t we call it Dust 3.0? Meanwhile, my son uses YouTube 1-2 hours a day looking for new cartoons and funny videos. To him, ads are just a sign saying “please wait”. Does he care if it’s AJAX or not? Give me a break. Like I said, I think we think too much.

  4. Chris_B

    Lou Reed doing 4 or 5 good songs with that backing band is far cooler than whatever Snake Oil 2.0 shill routine could have been going on.

    BTW, there’s no need to kill the “global brain” – it was and will continue to be stillborn.

  5. Rod Boothby

    Maybe the third iteration of a conference is like the third generation of a rich family. The first generation has the work ethic and the vision, and therefore makes the money. The second generation lacks the vision, but has the work ethic. The third generation has neither, and only knows how to party.

    The recent Office 2.0 conference felt like it had a purpose: to bring the read/write Internet behind the enterprise firewall.

    One thing comes to mind with this assumption that Web 3.0 is going to be all machine-based and semantic-web-driven. Is the semantic web really going to happen? Or are the same things going to be achieved with DIY tools such as QEDWiki and Teqlo, and lots of microformats?

    Is Web 3.0 the hive-mind mechanical Turk built on the human-driven read/write Internet, or a spooky web for machines, with everything you do and write being wrapped in a constricting swaddle of metadata?

  6. Peter Evans-Greenwood

    Regarding Danny’s comments on automated mark up and a more modest approach to leveraging the technology…

    I agree that some interesting solutions are technically possible with the current semantic web toolkit. However, if the data and technology are there, and have existed for a while, then why hasn’t it happened already? While we might have a solution that works in the lab, we don’t seem to have one that works for industry in general.

    This is a bit like the expert systems / rules engines divide. The same technology underlies both, but it’s only now – over 20 years after expert systems and the AI winter – that rules engines are getting significant traction. The IT environment and our approach to realising the technology needed to change for it to see widespread adoption, and these changes were a long time coming.

    The semantic web is in a similar position. Today, industry seems to consider the cost of retooling to support the semantic web not worth the effort. Perhaps the perceived benefits of the semantic web don’t justify the investment required, or maybe there are other problems to solve that have a bigger payoff. The semantic web solves a problem, but it doesn’t appear to be a problem that industry is interested in. Unless we see a radical innovation that suddenly changes these economics, we can expect the semantic web to remain on the outer for some time yet. I’d pick a larger number, rather than a small one.

  7. Yihong Ding

    With a little bit of hesitation, I’d like to disagree with assigning the term Web 3.0 to the Semantic Web. Indeed, we can see that, philosophically, the Semantic Web and Web 2.0 go in two different directions if we account for the phenomenon of web evolution. While Web 2.0 emphasizes enhanced human communication on the Web, Semantic Web research tends to add more machine intelligence to the Web. So if we assign the term x.0 to the reinforcement of the human side of the Web, the Semantic Web is not in this category, since it focuses on the machine side of the problem.

    I believe that the recent Web Science initiative may have answered this question even better. When we start to address the Web as a natural creature (meaning one independent of human participation), it begins to have two properties: (1) the property of itself (which the Semantic Web wants to address), and (2) the property of its collaboration with humans (which Web 2.0, and possibly later Web x.0, addresses).

  8. pauldwaite

    I’m with Mr Evans-Greenwood. This is just the “semantic web”, which I think only ever sounded credible because enough people didn’t realise “semantic” just means “meaningful”.

    To create meaningful data (thus enabling the kind of services the New York Times seems to think are ahead), you either get a human to do it, or a machine.

    Humans are good at it, but (as I think Tim Bray said on this very topic) they lie, make mistakes, and are lazy. So they’ll spam you, get the hotel’s address wrong, or just not bother.

    People are already bothering. Although the NYT classes “mash-ups” as Web 2.0, Google Maps illustrated with Flickr photos are only possible because of the meaningful GPS data people (or their cameras) have put into the photos. But apparently, they’re not bothering enough for the kind of services the NYT thinks are in the future. What will change that?

    As for computers divining this meaningful data from what’s already out there all by themselves, if anyone had actually achieved this, the world would have already changed. If computers don’t have the smarts to stop spam by themselves, they’ve got no chance of answering the kind of human-language questions suggested by the NYT.

    So, we’re back to humans. Humans can already add loads of meaning to data on the web. There are no barriers to this other than the humans themselves. If we’re not already doing it, what’s holding us back, and why will it go away?

  9. Chris_B

    At least part of the problem here is that “meaning” doesn’t really have an agreed-upon meaning, and thus I remain as skeptical of the WebAI as I do of the Hive Mind.

  10. Danny Ayers

    Peter, thanks for the response. To a large extent I agree with your points, but would lean towards different conclusions. I’m optimistic it’s not going to be shirtsleeves to shirtsleeves in three generations :-)

    “Why hasn’t it happened already?” is a fair question. I’d suggest that it is happening. The core specs were pinned down in a usable form in 2004 (RDF & OWL, there are still loose ends around SPARQL but it is usable now), and the toolkit implementations are remarkably mature given that timescale. The bottleneck has probably been developer awareness, but that has grown significantly even just in the last year. This compares favourably with e.g. the time it took for DHTML tech to get deployed as Web 2.0 Ajax.

    Your mention of expert systems/rules engines is interesting, given the big crossover between them and Semantic Web technologies. But there’s an aspect worth bearing in mind summed up in a nice quote from someone (? I forget exactly), something like “what’s new about the Semantic Web isn’t the Semantic, it’s the Web”.

    Re. “Today, industry seems to consider cost of retooling to support the semantic web not to be worth the effort.” – that’s debatable from two angles. Some sections of industry *are* retooling (e.g. the Oracle DB now supports RDF), but in many cases significant retooling isn’t actually necessary. An example there is D2RQ, a setup that you hook up to an existing SQL DB, and with a little bit of configuration it gives you an RDF view of your data.

    This is a mighty robust line: “Semantic web solves a problem, but it doesn’t appear to be a problem that industry is interested in.” On that I’d suggest that industry is very much interested in the key problem – systematic integration of data that can be as diverse as the material on the web. But it goes back to awareness: it isn’t widely known that many of the hard parts of this problem have already been solved.

    I have to pick up on Paul’s comment re. “meaning”. The word “semantic” is heavily overloaded, and in the context of the Semantic Web it’s easy to read things into it that don’t necessarily apply. The data that, say, amazon.com has behind its site is meaningful enough to provide a useful (and profitable) service. The point is well made that the Google Maps + Flickr mashup takes human effort to gather and collate the data. But in other domains people already gather and collate data as a matter of course – address books, product inventories, music playlists, project plans and management tools… What the Semantic Web technologies offer is a way of maximising the utility of this data through integration and reuse. Joining the dots. Thanks to the Web, that’s possible on a global scale.

  11. pitsch

    the semantic web breathes too much strong A.I.; it tends to overlook the flaws of technocratic rationalism in trying to build a “general problem solver”. the web is built on top of messy, running code. the future of the web as a bureaucratized industrial service industry has to be measured against an equal degree of freedom. bureaucracy and infrastructure are NOT sexy. with google as the atomic kitchen of today, we’re beginning to miss some new kind of rock’n’roll radio…

  12. Webtronaut Innovations

    New company set to roll out its arsenal of bulletproof WEB 3.0 startups!

    Revolutionalize the Web? I guess we’ll just have to wait and watch. Just a heads up for investors who want to get an early start at riding the next wave.

    Apparently, their startups are being used by a SELECT group of beta testers. Keep your ears to the grindstone fellas!

    http://www.webtronaut.com
