The end of corporate computing (10th anniversary edition)


Last week, in its quarterly earnings report, Amazon revealed for the first time how much money its cloud computing operation, Amazon Web Services, takes in. The numbers were impressive. AWS has become an $8 billion business, and its revenues continue to grow swiftly, nearly doubling in the most recent quarter from the same period last year. The unit’s profit margin — a surprisingly robust 21 percent — is vastly wider than that of the company’s retailing operation. Indeed, without AWS, Amazon would have lost a lot of money in the quarter instead of posting a narrow profit.

AWS’s results show how well established “the cloud” has become. Most personal computing these days relies on cloud services — lose your connection, and your computing device becomes pretty much useless — and businesses, too, are looking more and more to the cloud, rather than their own data centers, to fill their information technology needs. It’s easy to forget how quickly this epochal shift in the nature of computing has occurred. Just ten years ago, the term “cloud computing” was unknown, and the idea that computing would become a centrally managed utility service was considered laughable by many big IT companies and their customers. Back then, in 2005, I wrote an article for MIT’s Sloan Management Review titled “The End of Corporate Computing” in which I argued that computing was fated to become a utility, with big, central data centers feeding services to customers over the internet’s grid. (The article inspired my 2008 book The Big Switch.) I got plenty of things wrong in the article, but I think the ensuing ten years have shown that the piece was fundamentally on target in predicting the rise of what we now call the cloud. So here, to mark the tenth birthday of the article, is the full text of “The End of Corporate Computing.”

Something happened in the first years of the 20th century that would have seemed unthinkable just a few decades earlier: Manufacturers began to shut down and dismantle their waterwheels, steam engines and electric generators. Since the beginning of the Industrial Age, mills and factories had had no choice but to maintain private power plants to run their machinery — power generation was a seemingly intrinsic part of doing business — but as the new century dawned, an alternative was emerging. Dozens of fledgling electricity producers were erecting central generating stations and using a network of wires to distribute their power to distant customers. Manufacturers no longer had to run their own dynamos; they could simply buy the electricity they needed, as they required it, from the new suppliers. Power generation was being transformed from a corporate function into a utility.

Now, almost exactly a century later, history is repeating itself. The most important commercial development of the last 50 years — information technology — is undergoing a similar transformation. It, too, is beginning an inexorable shift from being an asset that companies own — in the form of computers, software and myriad related components — to being a service that they purchase from utility providers. Few in the business world have contemplated the full magnitude of this change or its far-reaching consequences. To date, popular discussions of utility computing have rarely progressed beyond a recitation of IT vendors’ marketing slogans, laden with opaque terms like “autonomic systems,” “server virtualization” and “service-oriented architecture” [1]. Rather than illuminate the future, such gobbledygook has only obscured it.

The prevailing rhetoric is, moreover, too conservative. It assumes that the existing model of IT supply and use — and the corporate data center that lies at its core — will endure. But that view is perilously short-sighted. The traditional model’s economic foundation is already crumbling, and it is unlikely to survive in the long run. As the earlier transformation of electricity supply suggests, IT’s shift from a fragmented capital asset to a centralized utility service will be a momentous one. It will overturn strategic and operating assumptions, alter industrial economics, upset markets, and pose daunting challenges to every user and vendor. The history of the commercial application of information technology has been characterized by astounding leaps, but nothing that has come before — not even the introduction of the personal computer or the opening of the Internet — will match the upheaval that lies just over the horizon.

How to write a book when you’re paid by the page


When I first heard that Amazon was going to start paying its Kindle Unlimited authors according to the number of pages in their books that actually get read, I wondered whether there might be an opportunity for an intra-Amazon arbitrage scheme that would allow me to game the system and drain Jeff Bezos’s bank account. I thought I might be able to start publishing long books of computer-generated gibberish and then use Amazon’s Mechanical Turk service to pay Third World readers to scroll through the pages at a pace that would register each page as having been read. If I could pay the Turkers a fraction of a penny less to look at a page than Amazon paid me for the “read” page, I’d be able to get really rich and launch my own space exploration company.

Alas, I couldn’t make the numbers work. Amazon draws the royalties for the program from a fixed pool of funds, which serves to cap the upside for devious scribblers.
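The fixed-pool mechanics behind that cap can be sketched in a few lines of Python. All the figures here — the fund size and the page counts — are invented for illustration; they are not Amazon's actual numbers:

```python
# Toy model of a fixed royalty pool split in proportion to pages read.
# Every figure below is hypothetical.

def per_page_payout(pool: float, total_pages_read: float) -> float:
    """Royalty per page when a fixed fund is split across all pages read."""
    return pool / total_pages_read

pool = 10_000_000             # hypothetical monthly fund, in dollars
honest_pages = 2_000_000_000  # pages read by genuine readers (made up)

baseline = per_page_payout(pool, honest_pages)  # rate before any gaming

# A schemer floods the system with a billion fake "read" pages.
fake_pages = 1_000_000_000
diluted = per_page_payout(pool, honest_pages + fake_pages)

# The fake pages dilute the per-page rate for everyone, schemer included,
# and total payouts can never exceed the fixed fund.
assert diluted < baseline
assert abs(diluted * (honest_pages + fake_pages) - pool) < 1e-3
```

Because every fake "read" page also shrinks the per-page rate, a scheme like mine only redistributes a fixed pot among authors; it can never grow the pot itself.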

So much for my Mars vacation. Still, even in a zero-sum game that pits writer against writer, I figured I might be able to steal a few pennies from the pockets of my fellow authors. (I hate them all, anyway.) I would just need to do a better job of mastering the rules of the game, which Amazon was kind enough to lay out for me:

Under the new payment method, you’ll be paid for each page individual customers read of your book, the first time they read it. … To determine a book’s page count in a way that works across genres and devices, we’ve developed the Kindle Edition Normalized Page Count (KENPC). We calculate KENPC based on standard settings (e.g. font, line height, line spacing, etc.), and we’ll use KENPC to measure the number of pages customers read in your book, starting with the Start Reading Location (SRL) to the end of your book.

The first thing that has to be said is that if you’re a poet, you’re screwed. That page-normalization deal is going to kill you. I mean, Walt Whitman might do okay. But Mary Oliver? Totally hosed. So that manuscript of dense, trimetric verse you’ve been fussing over for the last twenty years? Shred it.

Music is the oil in the human machine


In announcing the free version of its music streaming service — that’s free as in ads — Google also discloses something revealing about the way it views music:

At any moment in your day, Google Play Music has whatever you need music for — from working, to working out, to working it on the dance floor — and gives you curated radio stations to make whatever you’re doing better. Our team of music experts, including the folks who created Songza, crafts each station song by song so you don’t have to.

This marks a continuation of Google’s promotion of what it terms “activity-based” music. Last year, soon after it acquired Songza, a company that specializes in “curating” playlists to suit particular moods and activities, Google rejiggered its music service to emphasize its practicality:

If you’re a Google Play Music subscriber, next time you open the app you’ll be prompted to play music for a time of day, mood or activity. Choose an activity to get options for several music stations to make whatever you’re doing even better — whether it’s a station for a morning workout, songs to relieve stress during traffic, or the right mix for cooking with friends. Each station has been handcrafted — song by song — by our team of music experts (dozens of DJs, musicians, music critics and ethnomusicologists) to give you the exact right song for the moment.

This is the democratization of the Muzak philosophy. Music becomes an input, a factor of production. Listening to music is not itself an “activity” — music isn’t an end in itself — but rather an enhancer of other activities, each of which must be clearly demarcated. (As I’ve argued before, the fuzziness of human experience is anathema to Silicon Valley. Before you can code it, you have to formalize it.)

When triumphalists fail, they fail triumphantly


Progress turns everyone into a nostalgist sooner or later. You just have to wait for your own particular trigger to come along — the new thing that threatens the old thing you love.

David Weinberger has a new article in The Atlantic called “The Internet That Was (and Still Could Be).” It’s a tortured and ultimately dishonest piece that calls to mind some lines from a great old Buzzcocks tune:

About the future I only can reminisce
For what I’ve had is what I’ll never get
And although this may sound strange
My future and my past are presently disarranged
And I’m surfing on a wave of nostalgia
For an age yet to come.

Weinberger, coauthor of The Cluetrain Manifesto and author of Small Pieces Loosely Joined, has long argued that the “architecture” of the internet provides not only a metaphor but an actual working model for a more perfect society. The net was created with data-communication protocols that enabled “packets of information [to be moved] around without any central management or control,” and that technical architecture, he contends, not only facilitates but promotes democratic values such as “open access to information” and “the permission-free ability to read and to post.” Spanning civil and commercial interests, the net is “an open market of ideas and businesses” that provides “a framework for bottom-up collaboration among equals.”

Media takes command

Last Saturday, I had the pleasure of addressing the annual convention of the Media Ecology Association in Denver. The title of my talk was “Media Takes Command: An Inquiry into the Consequences of Automation.” Here is what I said, along with the slides that accompanied the remarks.

[Slide: media ecology.001]

As I was trying to figure out what to talk about this afternoon, I found myself flipping through a copy of Neil Postman’s Amusing Ourselves to Death — the twentieth anniversary edition. I started thinking about one of the promotional blurbs printed at the front of the book. A reviewer for the Christian Science Monitor had written that Postman “starts where Marshall McLuhan left off, constructing his arguments with the resources of a scholar and the wit of a raconteur.”

[Slide: media ecology.002]

I can’t lay claim to either the resources of a scholar or the wit of a raconteur, but at least I can follow Postman’s lead in starting where McLuhan left off. In fact, I’d like to start literally where he left off, with the final line of his most influential work, the 1964 book Understanding Media:

“Panic about automation as a threat of uniformity on a world scale is the projection into the future of mechanical standardization and specialism, which are now past.”

That’s not one of McLuhan’s better sentences. But it does include a couple of ideas that seem pertinent to our current situation.

The seconds are just packed


This post is the final installment in Rough Type’s Realtime Chronicles, which began here in 2009. An earlier version of this post appeared at

“Everything is going too fast and not fast enough,” laments Warren Oates, playing a decaying gearhead called G.T.O., in Monte Hellman’s 1971 masterpiece Two-Lane Blacktop. I can relate. The faster the clock spins, the more I feel as if I’m stuck in a slo-mo GIF loop.

It’s weird. We humans have been shown to have remarkably accurate internal clocks. Take away our wristwatches and our cell phones, dim the LEDs on all our appliances and gizmos, and we can still make pretty good estimates about the passage of minutes and hours. Our brains have adapted well to mechanical time-keeping devices. But our time-tracking faculty goes out of whack easily. Our perception of time is subjective; it changes, as we all know, with circumstances. When things are happening quickly around us, delays that would otherwise seem brief begin to feel interminable. Seconds stretch out. Minutes go on forever. “Our sense of time,” observed William James in his 1890 Principles of Psychology, “seems subject to the law of contrast.”

In a 2009 article in the Philosophical Transactions of the Royal Society, the French psychologists Sylvie Droit-Volet and Sandrine Gil described what they call the paradox of time: “although humans are able to accurately estimate time as if they possess a specific mechanism that allows them to measure time,” they wrote, “their representations of time are easily distorted by the context.” They describe how our sense of time changes with our emotional state. When we’re agitated or anxious, for instance, time seems to crawl; we lose patience. Our social milieu, too, influences the way we experience time. Studies suggest, write Droit-Volet and Gil, “that individuals match their time with that of others.” The “activity rhythm” of those around us alters our own perception of the passing of time.

A reasonable part of the house


There was, in most homes, a small, boxy machine affixed to the wall, usually in the kitchen, and this machine was called a telephone. —Wikipedia, 2030

The home telephone had a good hundred-year run. Its days are numbered now. Its name, truncated to just phone, will live on, attached anachronistically to the diminutive general-purpose computers we carry around with us. (We really should have called them teles rather than phones.) But the object itself? It’s headed for history’s landfill, one layer up from the PalmPilot and the pager.

A remarkable thing about the telephone, in retrospect, is that it was a shared device. It was familial rather than personal. That entailed some complications.

In his monumental study of the forms of human interlocution, published posthumously in 1992 as the two-volume Lectures on Conversation, the sociologist Harvey Sacks explained how the arrival of the home telephone introduced a whole new role in conversation: that of the answerer. There was the caller, there was the called, and then there was the answerer, who might or might not also be the called. The caller would never know for sure who would answer the phone — it might be the called’s mom or dad rather than the called — and what kind of pre-conversational rigamarole might need to be endured, what pleasantries might need to be exchanged, what verbal gauntlet might need to be run, before the called would actually take the line. As for the answerer, he or she would not know, upon picking up the phone, whether he or she would also be playing the role of the called or would merely serve as the answerer, a kind of functionary or go-between. Each ringing of the telephone set off little waves of subterranean tension in the household: expectation, apprehension, maybe even some resentment.


“Is Amy there?”

“Who’s calling?”


In non-professional settings by and large, it’s from among the possible calleds that answerers are selected; answerer being now a merely potential resting state, where you’ve made preparations for turning out to be the called right off when you say “Hello.” Answerers can become calleds, or they can become non-calleds-but-talked-to, or they can remain answerers, in the sense of not being talked to themselves, and also having what turn out to be obligations incumbent on being an answerer-not-called; obligations like getting the called or taking a message for the called.

As I said: complications. And also: an intimate entwining of familial interests.

The answerer, upon realizing that he is not the called, Sacks continues, occupies “the least happy position” in the exchange.

Having done the picking up of the phone, they have been turned into someone at the mercy of the treatment that the caller will give them: What kind of jobs are they going to impose? Are they even going to talk to them? A lot of family world is implicated in the way those little things come out, an enormous amount of conflict turning on being always the answerer and never the called, and battles over who is to pick up the phone.

“I’ll get it!”

But what exactly will you get?

And so here we have this strange device, this technology, and it suddenly appears in the midst of the home, in the midst of the family, crouching there with all sorts of inscrutable purposes and intents. And yet — and this is the most remarkable thing of all — it doesn’t take long for it to be accommodated, to come to feel as though it’s a natural part of the home. Rather than remaking the world, Sacks argues, the telephone was subsumed into the world. The familial and social dynamics that the telephone revealed, with each ring, each uncradling of the receiver, are ones that were always already there.

Here’s an object introduced into the world 75 years ago. And it’s a technical thing which has a variety of aspects to it. It works only with voices, and because of economic considerations people share it … Now what happens is, like any other natural object, a culture secretes itself onto it in its well-shaped ways. It turns this technical apparatus which allows for conversation, into something in which the ways that conversation works are more or less brought to bear …

What we’re studying, then, is making the phone a reasonable part of the house. … We can read the world out of the phone conversation as well as we can read it out of anything else we’re doing. That’s a funny kind of thing, in which each new object becomes the occasion for seeing again what we can see anywhere; seeing people’s nastinesses or goodnesses and all the rest, when they do this initially technical job of talking over the phone. This technical apparatus is, then, being made at home with the rest of our world. And that’s a thing that’s routinely being done, and it’s the source for the failures of technocratic dreams that if only we introduced some fantastic new communication machine the world will be transformed. Where what happens is that the object is made at home in the world that has whatever organization it already has.

“Who is it?”

“It’s me.”

Image: detail of a Bell System advertisement, circa 1960.