Edgeio vs. Freegeio

I sort of trashed Edgeio when it originally unveiled itself a year and a half ago. The company, founded by Keith Teare with some help from Mike Arrington and others, wanted to be a centralized clearinghouse for decentralized classified ads. Instead of posting your ad on Craigslist or listing your product on eBay, you’d put an ad on your own blog and, through the magic of RSS, it would automatically be aggregated with other people’s ads on the Edgeio site. What Edgeio was offering was an elegant solution to a problem that no one had.

Last week, Edgeio introduced a new service – a “distributed paid-content platform,” in the company’s non-memorable phrasing – that is altogether more interesting. In essence, Edgeio is providing a shopping-cart-in-a-widget that makes it easy to sell digital goods through a site. For instance, if I decided to sell the post you’re now reading for, say, $3.00, I could stick a little button right here saying, “To read this entire post, click here.” You’d click, a box would appear asking you to pay the three bucks, you’d pay the fee (right?), and then lickety-split the rest of the text would appear. You’d be a happy buyer, I’d be a happy seller, and Edgeio would also be happy because it would pocket a 20% cut of the sale price. If I wanted to sell an MP3 music file or podcast or a video stream or a PDF, I could do it in the same way.
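Edgeio hasn’t published the innards of its widget, but conceptually it’s just a few lines of glue between a button and a payment service. Here’s a minimal sketch of the buy-and-unlock flow in TypeScript – every endpoint and name below is my invention, not Edgeio’s:

```typescript
// Hypothetical sketch of an Edgeio-style paid-content widget. None of
// these endpoints or names are real; they only illustrate the flow:
// click, pay, reveal the locked content in place.
const LISTING_ID = "rough-type-post-123"; // hypothetical listing id
const PRICE_USD = 3.0;

async function unlockContent(button: HTMLElement, target: HTMLElement) {
  // 1. Ask the (invented) payment service to charge the reader $3.00.
  const response = await fetch("https://paywall.example.test/charge", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ listing: LISTING_ID, amount: PRICE_USD }),
  });
  if (!response.ok) return; // payment declined or cancelled

  // 2. On success, fetch the locked text and swap it in, lickety-split.
  const { html } = await response.json();
  target.innerHTML = html;
  button.remove();
}
```

Of my $3.00, Edgeio’s 20% cut would come to 60 cents, leaving me $2.40 a reader.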

OK, technically speaking, that’s nothing new. I could do something similar through PayPal. Except that Edgeio greatly simplifies the process. As someone who once tried, and failed miserably, to figure out how to use PayPal to offer an in-site purchase of text, I can tell you that simplification is a powerful business model.

Then again, who would pay $3 to read this? Answer: nobody (except maybe Keith Teare). But what if, instead of being a short post about Edgeio, this was a brilliant 5,000-word analysis of the future of the enterprise application market, and instead of asking three bucks for it, I gave it a price tag of $500? That, I think, is where the Edgeio service holds some promise. It’s not about mass-market micropayments; it’s about niche-market macropayments.

But where the Edgeio service gets really interesting, at least in theory, is that it builds in an affiliate program. What that means is that other people would also be able to sell this post (or a music file or a video stream or a PDF of that brilliant 5,000-word analysis) through their own sites, and they would earn a percentage of the sale price, set by me. To put it somewhat grandiosely, the Edgeio service automates the creation of a distribution network, at both the logistical and the contractual level.
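Edgeio hasn’t detailed how the money would actually flow in an affiliate sale, so take this as a sketch of the arithmetic rather than the company’s terms – in particular, the order in which the cuts are taken is my assumption:

```typescript
// Hypothetical three-way split for an affiliate sale of my $500 analysis.
// Assumes Edgeio takes its 20% off the top and the affiliate's share
// (set by the seller) comes out of the remainder; Edgeio may well order
// the cuts differently.
function splitSale(price: number, affiliateShare: number) {
  const platformCut = price * 0.2;          // Edgeio's 20%
  const remainder = price - platformCut;
  const affiliateCut = remainder * affiliateShare;
  const sellerCut = remainder - affiliateCut;
  return { platformCut, affiliateCut, sellerCut };
}

// A $500 report sold through an affiliate to whom I've offered 25%:
// Edgeio gets $100, the affiliate gets $100, and I keep $300.
console.log(splitSale(500, 0.25));
```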

But there’s one very important thing that the service lacks: the ability for sellers to aggregate diverse bits of content from various producers into a bundle and to sell the bundle rather than the individual pieces. Say, for instance, I’m a college professor who has developed an interesting new course that other professors might want to give. I could use an Edgeio-like service to collect the course readings into a single bundle that I could sell with a teaching plan. Or say I’m a master of the mix tape. I could create a bundle of songs and sell them through my site (rather than posting my most-excellent playlist on iTunes and letting Apple make all the money by selling the actual tunes). Or say I’m a post-modern magazine tycoon. People would pay me to assemble a nifty bundle of articles drawn from various sites, thus saving themselves a lot of time reading a lot of crap. When you let sellers bundle, you open the way for a lot more creativity in merchandising – and you temper some of the problems with the micropayment model.
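Bundling isn’t something Edgeio offers, but the data structure such a feature implies is simple enough to sketch – all names here are hypothetical:

```typescript
// Hypothetical sketch of a sellable bundle: items from many producers,
// one retail price, and an automated payout to each producer per sale.
interface BundleItem {
  producer: string;       // whoever owns the underlying content
  title: string;
  wholesalePrice: number; // what the producer gets on each bundle sold
}

interface Bundle {
  curator: string;     // the professor, mix-tape master, or tycoon
  retailPrice: number; // what the buyer pays for the whole package
  items: BundleItem[];
}

// The curator's margin is whatever remains after paying the producers
// (ignoring the platform's cut, for simplicity).
function curatorMargin(bundle: Bundle): number {
  const payouts = bundle.items.reduce((sum, item) => sum + item.wholesalePrice, 0);
  return bundle.retailPrice - payouts;
}
```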

I realize I’m letting my sentimental bias show: I still hope that there will be a way to actually sell stuff on the internet rather than having to give everything away for free, crassly plastered with ads. (Why? Because I think that the hegemony of “free” will in the long run end up narrowing our choices rather than expanding them.) Edgeio’s biggest competitor is Freegeio, and Freegeio will probably win. But, hey, it’s a nice try, and I hope Edgeio (a) adds a bundling capability to its service and (b) succeeds. And even if it doesn’t establish a context in which micropayments become attractive, the niche macropayments model may well work.

Now, aren’t you glad you didn’t have to shell out $3 to read this post?

Cheapskate.

Cheaper, better IT

From my article “Ten Tips for Reducing Burgeoning IT Costs” in the new issue of Director Magazine:

The good news is that, in the wake of the Y2K scare and the bursting of the dotcom bubble, companies have grown more skeptical about IT and more conservative in their spending. Microsoft faces a much tougher sell [in pitching upgrades] this year than it did in 2001, when it rolled out Windows XP. Since then, exciting new technologies have also emerged that have allowed businesses to use their existing IT equipment more effectively and avoid buying new gear. Suddenly, companies are finding they can cut their IT budgets and still have the computing capabilities they need. Smart IT management is all about getting more for less. Here are 10 ways your business can achieve that goal …

Read.

The automation of social life

William Davies has written a brief, important essay called “The Cold, Cold Heart of Web 2.0” in The Register. He argues that it’s a mistake to assume that the technology-driven efficiencies we welcome in the commercial realm, as a means of reducing costs and, often, expanding choices, will also bring benefits when applied to the social or cultural realm. Society is not a market, and automation may harm it rather than enhance it.

“The first dotcom boom,” Davies writes, “was principally about putting the internet to work in increasing the efficiency of existing services.” It made activities like the purchase of books and the payment of taxes easier by automating some of their more time-consuming aspects. The main thrust of Web 1.0 was to streamline “one-to-many” services, which “feature an organisation that resembles a ‘producer’ offering something to individuals who resemble ‘consumers’, who usually have some choice about whether or not to accept it.”

Web 2.0, by contrast, “abandons this conventional one-to-many model of service provision, and sets about exploiting the many-to-many potential of the internet. Rather than using the web to connect producers to consumers, it is used to connect individuals to each other.” Computer networks have, of course, always supported many-to-many services, like bulletin boards and other social networks. What’s changed with Web 2.0, Davies writes:

is that these otherwise secluded and organic realms of social interaction are now the focus of obsessive technological innovation and commercial interest. The same technological zeal and business acumen that once was applied to improving the way we buy a book or pay our car tax is now being applied to the way we engage in social and cultural activities with others.

In short, efficiency gains are no longer being sought only in economic realms such as retail or public services, but are now being pursued in parts of our everyday lives where previously they hadn’t even been imagined. Web 2.0 promises to offer us ways of improving the processes by which we find new music, new friends, or new civic causes. The hassle of undesirable content or people is easier to cut out. We have become consumers of our own social and cultural lives.

The problem – and the danger – is that efficiency plays a very different role in the marketplace of products than it does in the realm of society and culture. “Undoubtedly there are instances where we do want our social lives to be more efficient,” writes Davies. “But we should worry about this psychology seeping too far into our lives.” We do not, and should not, judge the quality of our social and cultural life by its efficiency. As Davies concludes:

The pursuit of maximum convenience in the cultural sphere risks dissolving what we value in it in the first place. Outside of the economy – and very often within the economy too – we find that the constraints and accidents of everyday life are the basis for enjoyable and meaningful activities. They don’t necessarily connect us to the people we most want to speak to or the music we most want to listen to. Sometimes they even frustrate us. But this shouldn’t lead to business process re-engineering.

In a recent blog post, the usually perceptive Clay Shirky writes, of my own work, “I have never understood Nick Carr’s objections to the cultural effects of the internet … when he talks about the effects of the net on business, he sounds more optimistic, even factoring in the wrenching transition, so why aren’t the cultural effects similar cause for optimism, even accepting the wrenching transition in those domains as well?” The real question, to me, is this: Why in the world would anyone believe that the cultural effects of the internet would be beneficial simply because the internet’s effects on business are beneficial? And yet Shirky is far from alone in making this bizarre association – it runs like a vein of fool’s gold through the writing of the Net’s rose-tinted-glasses set. They want to believe that the processes of culture-making and society-building can be automated and reengineered as if they were the processes of widget-manufacturing. As Davies eloquently explains, they’re wrong.

(This is a theme, by the way, that runs, less succinctly, through my forthcoming book The Big Switch: Our New Digital Destiny.)

UPDATE: Ian Douglas and Joshua Porter offer thoughtful rejoinders. I agree with Porter that I was mistaken to call efficiency “an intrinsic good” in markets; I have edited my original post to temper that point. I disagree, however, with Porter’s contention that there’s no difference between “markets” and “social lives.”

Growing up virtual

From my column in today’s Guardian:

Compared to Club Penguin, Second Life, the much-hyped virtual world aimed at adults, is something of a ghost town. It’s managed to attract only about 95,000 paid subscribers so far, a fraction of Club Penguin’s 700,000. In fact, all of the most popular virtual worlds are geared to kids and teenagers. The venerable Habbo Hotel, originally launched in Finland in 2000, attracts 7 million visitors a month, Sweden’s Stardoll attracts 5 million, Webkinz and Neopets attract 4 million each, and Gaia Online reports nearly 3 million monthly visitors … Clearly, there are big commercial rewards to be had by enticing children to spend a lot of time exploring virtual worlds. What’s less clear, though, is the long-term effect on the kids themselves.

Read.

What is Web 3.0?

Back in May, an intrepid interlocutor in Korea stuck a pointy stick into a semantic hornet’s nest by asking Google’s resident CEO, Eric Schmidt, an “easy question”: What is Web 3.0? After some grumbling about “marketing terms,” Schmidt obliged, saying that, to him, Web 3.0 is all about the simplification and democratization of software development, as people would begin to draw on the tools and data floating around in the Internet “cloud” to cobble together custom applications, which they would then share “virally” with friends and colleagues. Said Schmidt:

My prediction would be that Web 3.0 would ultimately be seen as applications that are pieced together [and that share] a number of characteristics: the applications are relatively small; the data is in the cloud; the applications can run on any device – PC or mobile phone; the applications are very fast and they’re very customizable; and furthermore the applications are distributed essentially virally, literally by social networks, by email. You won’t go to the store and purchase them. … That’s a very different application model than we’ve ever seen in computing … and likely to be very, very large. There’s low barriers to entry. The new generation of tools being announced today by Google and other companies make it relatively easy to do. [It] solves a lot of problems, and it works everywhere.

This is – big surprise – a vision of network computing that dovetails neatly with Google’s commercial and technological interests. Google is opposed to all proprietary applications and data stores (unless it controls them) because walled sites and applications conflict with its three overarching and interconnected goals: (1) to get people to live as much of their lives online as possible, (2) to be able to track all online activity as closely as possible, and (3) to deliver advertising connected to as much online activity as possible. (“Online” encompasses anything mediated by the Net, not just things that appear on your PC screen.) To put it a different way, all software and all data are simply complements to Google’s core business – serving advertisements – and hence Google’s interest lies in destroying all barriers, whether economic, technological, or legal, to all software and all data. Almost everything the company does, from building data centers to buying optical fiber to supporting free wi-fi to fighting copyright to supporting open source to giving software and information away free, is about removing those barriers.

In the mind of the Googleplex, the generations of the web proceed something like this:

Web 1.0: web as extension of PC hard drive

Web 2.0: web as application platform complementing PC operating system and hard drive

Web 3.0: web as universal computing grid replacing PC operating system and hard drive

Web 4.0: web as artificial intelligence complementing human race

Web 5.0: web as artificial intelligence supplanting human race

That’s fine and dandy, but there’s a little problem. Schmidt’s definition of Web 3.0 seems to conflict with the prevailing definition, which presents “Web 3.0” as a synonym for what used to be called (and sometimes still is) “the Semantic Web.” In this definition, Web 3.0 is all about creating a richer, more meaningful language for computers to use in communicating with other computers over the Net. It’s about getting machines to do a lot of the interpretive functions that currently have to be done by people, which would ultimately take automation to a whole new level.
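The basic currency of the Semantic Web is the subject-predicate-object triple: a statement a machine can read, join, and reason over without a human interpreting a web page. A toy sketch, with an entirely invented vocabulary:

```typescript
// A toy subject-predicate-object triple store, the Semantic Web's basic
// building block: machine-readable statements rather than human-readable
// pages. The identifiers and predicates here are made up.
type Triple = [subject: string, predicate: string, object: string];

const store: Triple[] = [
  ["post:web30",   "writtenBy",   "author:ncarr"],
  ["author:ncarr", "writesAbout", "topic:utility-computing"],
  ["post:web30",   "discusses",   "topic:semantic-web"],
];

// A machine, not a person, answers "what does this author write about?"
const topics = store
  .filter(([subject, predicate]) =>
    subject === "author:ncarr" && predicate === "writesAbout")
  .map(([, , object]) => object);
console.log(topics); // ["topic:utility-computing"]
```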

Here are the generations of the web from the Semanticist perspective:

Web 1.0: web as people talking to machines

Web 2.0: web as people talking to people (through machines)

Web 3.0: web as machines talking to machines

Web 4.0: web as artificial intelligence complementing human race

Web 5.0: web as artificial intelligence supplanting human race

Now, it’s true that both visions end in the same sunny place, with the universal slavesourcing – sorry, I mean crowdsourcing – of human intelligence and labor by machines, but, still, the confusion about the nature of Web 3.0 is problematic. Here we are, halfway through 2007, and we still don’t have a decent, commonly held definition of Web 2.0, and already we have competing definitions of the Web’s next generation.

Or do we? I think that the apparent conflict between the two definitions may in fact be superficial, arising from the different viewpoints taken by Schmidt (an applications viewpoint) and the Semanticists (a communications viewpoint). As a public service, therefore, I will put on my Tim O’Reilly mask and offer a definition of Web 3.0 capacious enough to encompass both the traditional Semantic Web definition and Eric Schmidt’s mashups-on-steroids definition: Web 3.0 involves the disintegration of digital data and software into modular components that, through the use of simple tools, can be reintegrated into new applications or functions on the fly by either machines or people.
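In code, that definition is just everyday mashup work: pull modular data from a couple of services in the cloud and reintegrate it, on the fly, into a new function. A toy sketch, with both endpoints invented:

```typescript
// A toy mashup in the sense of the definition above: two modular data
// sources pulled from the cloud and reintegrated, on the fly, into a
// new function. Both endpoints are invented for illustration.
async function rainOrShine(city: string): Promise<string> {
  const [weather, events] = await Promise.all([
    fetch(`https://weather.example.test/today?city=${city}`).then(r => r.json()),
    fetch(`https://events.example.test/tonight?city=${city}`).then(r => r.json()),
  ]);
  // The "new application" is nothing more than the recombination.
  return weather.raining
    ? `Stay in: ${events.indoor[0]}`
    : `Go out: ${events.outdoor[0]}`;
}
```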

Stick that in your Yahoo Pipe and smoke it.

Microsoft’s forecast: cloudy

In a talk before Wall Street analysts yesterday, Microsoft’s coder-in-chief, Ray Ozzie, described in broad strokes the software giant’s plans to build its utility computing business as more and more computing functions and software applications turn into services supplied through the “cloud” of the Internet. The “services transformation,” he said, is “a very, very big deal for our company.” Ozzie made no concrete product announcements, however, offering only a vague promise that “over the course of the next 12 to 18 months, we are going to begin introducing a number of new and very key components both at the platform layer and at the app layer.”

Microsoft, Ozzie said, is constructing its utility computing business in three layers. First is the “physical layer” of data centers, which are “of massive scale.” Like Google, Microsoft is building its data centers out of huge numbers of cheap servers and other “commodity components,” both to keep costs down and to “achieve reliability through redundancy.” Over the last year, Ozzie said, Microsoft has doubled the number of servers installed in its utility plants “and we will keep investing.”

The second layer is “our cloud infrastructure services layer,” which forms the “utility computing fabric upon which all of our online services run.” This is essentially the operating system for the data centers, or the “cloud OS,” as it’s sometimes called. It consists of the software that manages the assembly of the overall computing capacity into discrete virtual machines and the automated deployment of those machines to run various web services. It also coordinates the other two crucial infrastructure elements: storage and networking.
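Ozzie gave no specifics, but the core job he’s describing – carving a pool of commodity servers into virtual machines and parceling services out among them – is at bottom a scheduling problem. A deliberately naive sketch, with every name invented:

```typescript
// A deliberately naive sketch of what a "cloud OS" fabric automates:
// assigning service instances to whichever commodity servers in the
// pool have spare capacity. Real fabrics also handle failure, storage,
// and networking; every name here is invented.
interface Server { id: string; freeSlots: number; }
interface Service { name: string; instancesNeeded: number; }

function placeServices(pool: Server[], services: Service[]): Map<string, string[]> {
  const placement = new Map<string, string[]>(); // server id -> services
  for (const svc of services) {
    let remaining = svc.instancesNeeded;
    for (const server of pool) {
      while (remaining > 0 && server.freeSlots > 0) {
        server.freeSlots--;
        remaining--;
        const assigned = placement.get(server.id) ?? [];
        assigned.push(svc.name);
        placement.set(server.id, assigned);
      }
      if (remaining === 0) break;
    }
  }
  return placement;
}
```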

The third layer – what Ozzie calls “the Live platform services layer” – consists of a set of shared services, such as identity management, contact databases, and advertising, that the company’s online applications will draw on. These applications, which Ozzie, evidencing a surprisingly conservative vision, says will “target individuals and very small businesses,” are “generally ad-monetized applications, and because of that, there’s synergy in sharing data and features among the apps at this level.”

Ozzie describes five target customers for Microsoft’s web services. First are consumers, who will be offered entertainment, commerce, and communication. Second are “information workers,” who will be offered collaboration tools: “Seamless Office scenarios that span the PC, the Web and even the phone. Documents that go wherever you want them, news scenarios, sharing scenarios, meeting scenarios, note-taking, presentation scenarios that use PCs for what they’re really good for: for document creation and editing and review. That use the Web for what it’s really good for: publishing and sharing and universal access.”

Third are IT staffs, whose main benefit from the shift to utility computing will be cost savings, says Ozzie: “For enterprise IT in the short term, this is mostly going to be about moving IT infrastructure to the cloud, either in whole or in part. Things like e-mail or content management, information sharing, and so on.” The fourth target customer group consists of business managers, who will gain greater speed and flexibility in deploying IT resources as applications turn into services. Finally, there are the software developers, who by drawing on the utility computing grid will be able “to run applications and store data at very, very low cost [and], for all practical purposes, with infinite capacity that’s shared with other people like themselves.”

Ozzie closed his talk with an attempt to position Microsoft as the company best suited to dominate the cloud, as it’s dominated the desktop, through a combination of “software plus services”:

We’re building a platform to support our own apps and solutions, and to support our partners’ applications and solutions, and to support enterprise solutions and enterprise infrastructure. We are the only company in the industry that has the breadth of reach from consumer to enterprises to understand and deliver and to take full advantage of the services opportunity in all of these markets. I believe we’re the only company with the platform DNA that’s necessary to viably deliver this highly leveragable platform approach to services. And we’re certainly one of the few companies that has the financial capacity to capitalize on this sea change, this services transformation.

I remember, back when the computing industry was going through its last great sea change, with the arrival of the personal computer, IBM assumed it was the “only company in the industry” with the customer base, the capabilities, and the cash to dominate the next generation of computing. But as a small upstart named Microsoft showed Big Blue, that ain’t necessarily so. Microsoft and Ozzie have been talking a good game about cloud computing for the past two years. But we’re still waiting for the Redmond team to take the field.

Poof!

Contemplating the San Francisco power outage that blew a hole in Web 2.0 yesterday, Om Malik writes:

Whatever the reasons behind the failure might be, yesterday was a rude reminder of how fragile our digital lives are. The seemingly invincible web services (not to mention the notional wealth they signify) vanish with a blink of the eye. It was also a reminder that all the hoopla around web services is just noise – for in the end the hardware, the plumbing, the pipes and more importantly, the power grid is the real show.

Amen. As I wrote back in April: “Web 2.0 isn’t about applications. It’s about bricks and mortar. It’s about capital assets. It’s about infrastructure.” There’s a reason Google builds scores of data centers and places them next to big dams with hydroelectric generating stations. The new grid is built on the old grid, and the old grid is fraying.

In a meeting between top tech companies and the Department of Energy last December, one of the attendees said, “I think we may be at the beginning of a potential energy crisis for the IT sector. It’s clearly coming.” And he wasn’t just talking about a stray power-station snafu or a faulty generator. He was talking about tapping out the power grid. Literally.