Monthly Archives: November 2005

Consumer utilities

Washington Post business columnist Steven Pearlstein writes about my ideas on utility computing in today’s edition. He notes:

It was only 20 years ago when everyone was sure that computing would become increasingly decentralized – out with the old mainframe and in with the personal computer, which would become ever more powerful with each generation of computer chip. Now, however, the swing to centralization is driven by the new economics of the Internet and dirt-cheap communication, and technological advances that make it easier for different programs and operating systems to work with each other and allow large numbers of servers and disk drives to effectively act as one big computer.

It’s interesting to watch how discussions of the utility computing model are broadening to a general audience. When I was writing my MIT Sloan Management Review article The End of Corporate Computing about a year ago (Pearlstein focuses on that piece), I assumed that discussions of utility computing would mainly concentrate on its business applications and that interest in consumer applications would develop more slowly. Since then, though, the hype about Web 2.0 (which is in one sense a code word for utility computing on the consumer side) has overturned that assumption. At the moment it’s the central services provided to the general marketplace by Google, Yahoo, Six Apart et al. that are shaping our understanding of and our conversations about the utility model, even more so than the purely business applications.

The consumer side also provides, at the moment, the greatest insights into how centralization (e.g., iTunes Music Store) and decentralization (e.g., iPod) can happen simultaneously and symbiotically.

Not like breathing

In the current Business Week, Steve Hamm writes about the state of what’s come to be known as “autonomic computing.” The term was coined a few years back by IBM’s Paul Horn, in a paper called Autonomic Computing Manifesto, and Hamm does a good job of distilling Horn’s idea:

Scientists needed to come up with a new generation of computers, networks, and storage devices that would look after themselves. The name for his manifesto came from a medical term, the autonomic nervous system. The ANS automatically fine-tunes how various organs of the body function, making your heart beat faster, for instance, when you’re exercising or stressed. In the tech realm, the concept was that computers should monitor themselves, diagnose problems, ward off viruses, even heal themselves. Computers needed to be smarter. But this wasn’t about machines thinking like people. It was about machines thinking for themselves.

I contribute a brief quote to Hamm’s article, though I have to admit to being a little ambivalent about “autonomic computing.” It’s not that I have any problem with the concept of simplifying and automating many of the basic computer operations that today tend to be highly complicated, requiring a lot of manual intervention by people. That shift is necessary and, I feel, inevitable. What bothers me is the term “autonomic computing” itself. It’s a bad metaphor.

The real power of the idea is not that computers will run themselves, in the way that the autonomic nervous system runs itself. Rather, it's that, by automating many of the lower-level computing chores, like allocating computing, storage, and network capacity, setting up new applications, metering usage, and so on, people actually gain greater control over the systems. We become able to program the way the systems work at a higher level, establishing, for instance, the criteria that determine how different computing jobs get prioritized based on our company's business needs.

We don’t want computer systems to breathe by themselves, in other words. We want to be able to tell them exactly how we want them to breathe, to be able to set and adjust their “heartbeat” to suit our own requirements. Automating computing is – or should be – all about giving people, not machines, greater control.
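
To make that concrete, here is a rough sketch of the kind of higher-level programming I have in mind: a set of business rules that an automated system could apply when deciding which computing jobs to run first. The job fields and priority tiers below are purely hypothetical, invented for illustration rather than drawn from any actual product.

```python
# Hypothetical policy sketch (not any vendor's actual API): people
# define the business rules; the automated system applies them.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    department: str        # who submitted the work
    deadline_hours: float  # how soon the results are needed

# The higher-level "programming": rules that reflect business
# priorities, not machine-level details. First matching rule wins.
PRIORITY_RULES = [
    (lambda j: j.department == "drug-discovery", 1),  # most urgent
    (lambda j: j.deadline_hours < 24, 2),
    (lambda j: True, 3),                              # everything else
]

def priority(job: Job) -> int:
    for rule, level in PRIORITY_RULES:
        if rule(job):
            return level
    return len(PRIORITY_RULES) + 1

jobs = [
    Job("quarterly-report", "finance", deadline_hours=12),
    Job("protein-folding-run", "drug-discovery", deadline_hours=200),
    Job("log-archival", "it-ops", deadline_hours=500),
]

for job in sorted(jobs, key=priority):
    print(f"{job.name}: priority {priority(job)}")
```

The point isn't the code; it's that the knobs being turned are business criteria, not server settings.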

Kill all screensavers

Combine an overabundance of computing power with the natural inclination of corporate functionaries to launch useless “initiatives,” and you’ve got a toxic recipe.

Case in point: the company screensaver. Yes, I’m serious.

I was talking yesterday with the CIO of a pharmaceuticals firm. We were discussing grid computing’s potential for supporting the heavy-duty number crunching required in modern drug development. He said that while grids were theoretically attractive as a cheap means of harnessing lots of processing power, he faced a big roadblock: his company’s official screensaver. It turns out that the corporate communications department created an elaborate screensaver, complete with video clips featuring the CEO, to promulgate a “corporate values” program. Installed on all the company’s PCs, the screensaver sucks up the processing cycles that might otherwise be put to a productive use – like finding a cure for cancer.

Isolated problem? Apparently not. A second CIO, overhearing our conversation, said that his company, too, had a screensaver problem. The human resources department had put together a similarly graphics-intensive screensaver that was running on all the company’s PCs. By preventing monitors and processors from going to sleep, it was sucking up a ton of electricity. He also mentioned that a recent problem with sluggish server performance had been traced to geeky screensavers being run in the corporate data center.

I did some quick research on the electricity issue. A PC with a screensaver going can use well over 100 watts of power, compared with only about 10 watts in sleep mode. An analysis by the University of New Hampshire indicates that if an organization has 5,000 PCs that run screensavers 20 hours a week, the annual power consumed by those screensavers “accounts for emissions of 750,000 pounds of carbon dioxide, 5,858 pounds of sulfur oxide, and 1,544 pounds of nitrogen oxide.” Considering that there are something like 600 million PCs in use today – and that it’s not unusual for people to leave screensavers running all night – we’re talking some big, ugly numbers.
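
For what it's worth, here's the back-of-the-envelope arithmetic, treating the wattage figures above as rough assumptions:

```python
# Rough arithmetic using the ballpark figures quoted above:
# ~100 W with a screensaver running vs. ~10 W in sleep mode,
# for 5,000 PCs running screensavers 20 hours a week.
pcs = 5000
hours_per_week = 20
weeks_per_year = 52
watts_screensaver = 100
watts_sleep = 10

wasted_kwh = pcs * hours_per_week * weeks_per_year * (watts_screensaver - watts_sleep) / 1000
print(f"Wasted electricity: {wasted_kwh:,.0f} kWh per year")
# Roughly 468,000 kWh a year, and that's before counting the monitors
# that never get a chance to power down.
```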

So turn off those damn screensavers. The life you save may be your own.

Hypermediation 2.0

Once upon a time – January 2000, to be exact – I wrote an article for the Harvard Business Review called Hypermediation: Commerce as Clickstream. In the early days of the commercial internet, it was generally assumed that the web was a force for disintermediation, that it would allow producers and consumers to connect directly, killing off middlemen along the way. I suggested that this view had it wrong: that while some traditional intermediaries were being cut out of the picture, myriad new ones were arising in their place:

Far from experiencing disintermediation, business is undergoing precisely the opposite phenomenon – what I’ll call hypermediation. Transactions over the web, even very small ones, routinely involve all sorts of intermediaries, not just the familiar wholesalers and retailers, but content providers, affiliate sites, search engines, portals, Internet service providers, software makers, and many other entities that haven’t even been named yet. And it’s these middlemen that are positioned to capture most of the profits.

The hypermediation phenomenon is continuing in the Web 2.0 world of online media. We’re seeing the emergence of another new set of diverse intermediaries focused on content rather than commerce: blog subscription services like Bloglines, headline aggregators like Memeorandum, blog search engines like Technorati, ping servers like Weblogs.com, community platforms like MySpace and TagWorld, tag aggregators like tRuTag, podcast distributors like iTunes, and of course blogs of blogs like Boing Boing. (Many of the most popular blogs in fact play more of a content-mediation role than a content-generation one.) Despite the once again common assumption that the web is a force for disintermediation in media, connecting content providers and consumers directly, the reality is that the internet continues to be a rich platform for intermediation strategies, and it’s the intermediaries who stand to skim off most of the profits to be made from Web 2.0.

As I wrote in 2000, the economic power of online intermediation flows from two very simple characteristics of the internet:

First is the sheer volume of activity. People make billions of clicks on the web every day, and because each click represents a personal choice, each also entails the delivery of value and thus an opportunity to make money. A penny isn’t a lot of money in itself, but when you start gathering millions or billions of them, you’ve got a business.

The second characteristic is efficiency. Most physical businesses wouldn’t be able to make money on penny transactions; it would cost them more than a penny to collect a penny. But the incremental cost of an on-line transaction is basically zero. It doesn’t cost anything to execute a line or two of code once the code’s been written. The pennies taken in by many intermediaries are almost pure profit.

It’s no coincidence that the most profitable internet businesses – eBay, Google, Yahoo – play intermediary roles. They’ve realized that, when it comes to making money on the web, what matters is not controlling the ultimate exchange (of products or content or whatever) but controlling the clicks along the way. That’s become even more true as advertising clickthroughs have become the main engine of online profits. Who controls the most clicks wins.

A blogosphere thanksgiving

Dan Farber yesterday responded to my response to his post extolling the virtues of the blogosphere. I’m now going to respond to his response to my response to his post. I’m not sure whether this back-and-forth supports my thesis about blogging or shreds it into little pieces, but it seems like a good way to sign off before the Thanksgiving holiday.

This year, by the way, my family has decided to avoid all the nuisance involved in putting together a big meal in meatspace and instead indulge in a digital simulation of the feast through the new Google Holidays (beta) service delivered through the Google Brain Plug-In (beta). What’s great about the service is that, because it’s supported by ads, you can enjoy your virtual turkey with all the trimmings while at the same time getting a head start on your Christmas shopping. Thanks, Sergey!

Where was I? Oh, yeah: blogging. Farber notes that:

Nick’s critique of blogging is really ironic. He started blogging in April and has now become part of what he calls the fantasy community of isolated egos. Clearly, the blogosphere is not as collegial or knowable as the Harvard campus.

I’ve been struggling with that irony as well. For the time being, at least, I’m going to revel in it rather than resist it. As to the alleged merits of the Harvard campus, I haven’t been there in a couple of years and have no plans to return.

Farber goes on to sum up my motivations as a blogger:

Instead of writing longer articles and waiting months for them to appear in print, or just emailing with his colleagues, [Nick] can offer and receive near instantaneous feedback, which, by the way, is all fodder for going ‘deeper’ and creating end (some revenue-generating) products, such as books, articles and speeches.

I’m not sure about the fodder point – so far, the blog stuff and the other stuff haven’t melded much, and the time given to the former has detracted from that given to the latter – but he’s right that instantaneous, self-controlled publishing is awfully seductive. Web 2.0 is kind of the apotheosis of the vanity press. But that seductiveness is, I’d argue, part of the problem. It’s so easy and cheap to circulate in the blogosphere, or the broader webosphere, that we, as a society, will inevitably tend to spend more and more time there – a trend, it’s important to remember, that Google, Yahoo, et al., have enormous economic incentives to propel. Slowly but steadily, the internet comes to mediate the way we take in and disgorge information, ultimately influencing, even reshaping, the very way our minds work. I really think that guy Richard Foreman was on to something when he wrote:

I come from a tradition of Western culture in which the ideal (my ideal) was the complex, dense and “cathedral-like” structure of the highly educated and articulate personality – a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West …

But today, I see within us all (myself included) the replacement of complex inner density with a new kind of self – evolving under the pressure of information overload and the technology of the “instantly available”. A new self that needs to contain less and less of an inner repertory of dense cultural inheritance – as we all become “pancake people” – spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.

That’s what scares me.

Jellybeans for breakfast

When my daughter was a little girl, one of her favorite books was Jellybeans for Breakfast. (Holy crap. I just checked Amazon, and used copies are going for hundreds of bucks!) It’s the story of a couple of cute tykes who fantasize about all the fun stuff they’d do if they were free from their parents and their teachers and all the usual everyday constraints. They’d ride their bikes to the moon. They’d go barefoot all the time. They’d live in a treehouse in the woods. And they’d eat jellybeans for breakfast.

Yesterday, Dan Farber wrote a stirring defense of blogging, illustrated by a picture of a statue of Socrates. “For the most part,” he said, “self assembling communities of bloggers hold a kind of virtual Socratic court, sorting out the issues of the day in a public forum, open to anyone, including spammers.” After discussing some technologies for organizing the blogosphere, he concluded:

For a journalist, technologist, politician or anyone with a pulse and who doesn’t know everything, blogs matter. Every morning I can wake up to lots of IQ ruminating, fulminating, arguing, evangelizing and even dispassionately reporting on the latest happenings in the areas that interest me, people from every corner of the globe. That’s certainly preferable to the old world and worth putting up with what comes along with putting the means of production in the hands of anyone with a connection to the Net.

That’s one way of looking at it, and most of what Farber says is true. I don’t think it’s the whole story, though. The blogosphere’s a seductive place – it’s easy to get caught up in it – and there are lots of interesting thoughts and opinions bouncing around amid the general clatter. But does it really provide a good way of becoming informed? Experiencing the blogosphere feels a lot like intellectual hydroplaning – skimming along the surface of many ideas, rarely going deep. It’s impressionistic, not contemplative. Fun? Sure. Invigorating? Absolutely. Socratic? I’m not convinced. Preferable to the old world? It’s nice to think so.

For all the self-important talk about social networks, couldn’t a case be made that the blogosphere, and the internet in general, is basically an anti-social place, a fantasy of community crowded with isolated egos pretending to connect? Sometimes, it seems like we’re all climbing up into our own little treehouses and eating jellybeans for breakfast.

Two-dimensional culture

In another great leap forward for two-dimensional culture, the U.S. Library of Congress is today proposing to build a “World Digital Library” of scanned artifacts from around the globe. In an op-ed in the Washington Post, the library’s top dog, James Billington, writes, “An American partnership in promoting such a project for UNESCO would show how we are helping other people recover distinctive elements of their cultures through a shared enterprise that may also help them discover more about the experience of our own and other free cultures.”

Does our arrogance know no bounds? Long the world’s cultural bulldozer, we’re now appointing ourselves to lead the way in creating a digital simulation that, says Billington, “would create for other cultures the documentary record of their distinctive achievements.” It’s so real you can almost touch it! Needless to say, Google’s the primary funder of the effort. As the Register puts it: “All your cultures are belong to us.”