Where did the computer go?

The following is an excerpt from “Burden’s Wheel,” the first chapter of The Big Switch: Rewiring the World, from Edison to Google, which is being published today by W. W. Norton & Company.

At a conference in Paris during the summer of 2004, Apple introduced an updated version of its popular iMac computer. Since its debut in 1998, the iMac had always been distinguished by its unusual design, but the new model was particularly striking. It appeared to be nothing more than a flat-panel television, a rectangular screen encased in a thin block of white plastic and mounted on an aluminum pedestal. All the components of the computer itself – the chips, the drives, the cables, the connectors – were hidden behind the screen. The advertising tagline wittily anticipated the response of prospective buyers: “Where did the computer go?”

But the question was more than just a cute promotional pitch. It was, as well, a subtle acknowledgment that our longstanding idea of a computer is obsolete. While most of us continue to depend on personal computers both at home and in the office, we’re using them in a very different way than we used to. Instead of relying on data and software that reside inside our computers, inscribed on our private hard drives, we increasingly tap into data and software that stream through the public Internet. Our PCs are turning into terminals that draw most of their power and usefulness not from what’s inside them but from the network they’re hooked up to – and, in particular, from the other computers that are hooked up to that network.

The change in the way we use computers didn’t happen overnight. Primitive forms of centralized computing have been around for a long time. In the mid-1980s, many early PC owners bought modems to connect their computers over phone lines to central databases like CompuServe, Prodigy, and the Well – commonly known as “bulletin boards” – where they exchanged messages with other subscribers. America Online popularized this kind of online community, greatly expanding its appeal by adding colorful graphics as well as chat rooms, games, weather reports, magazine and newspaper articles, and many other services. Other, more specialized databases were also available to scholars, engineers, librarians, military planners, and business analysts. When, in 1990, Tim Berners-Lee invented the World Wide Web, he set the stage for the replacement of all those private online data stores with one vast public one. The Web popularized the Internet, turning it into a global bazaar for sharing digital information. And once easy-to-use browsers like Netscape Navigator and Internet Explorer became widely available in the mid-1990s, we all went online in droves.

Through the first decade of its existence, however, the World Wide Web was a fairly prosaic place for most of us. We used it mainly as a giant catalogue, a collection of “pages” bound together with hyperlinks. We “read” the Web, browsing through its contents in a way that wasn’t so different from the way we’d thumb through a pile of magazines. When we wanted to do real work, or play real games, we’d close our Web browser and launch one of the many programs installed on our own hard drive – Microsoft Word, maybe, or Aldus PageMaker, or Encarta, or Myst.

But beneath the Web’s familiar, page-like surface lay a set of powerful technologies, including sophisticated protocols for describing and transferring data, that promised not only to greatly magnify the usefulness of the Internet but to transform computing itself. These technologies would allow all the computers hooked up to the Net to act, in effect, as a single information-processing machine, easily sharing bits of data and software code. Once the technologies were fully harnessed, you’d be able to use the Internet not just to look at pages on discrete sites but to run sophisticated software programs that might draw information from many sites and databases simultaneously. You’d be able not only to “read” from the Internet but to “write” to it as well – just as you’ve always been able to read from and write to your PC’s hard drive. The World Wide Web would turn into the World Wide Computer.

This other dimension of the Internet was visible from the start, but only dimly so. When you ran a Web search on an early search engine like AltaVista, you were running a software program through your browser. The code for the software resided mainly on the computer that hosted AltaVista’s site. When you did online banking, shifting money between a checking and a savings account, you were also using a utility service, one that was running on your bank’s computer rather than your own. When you used your browser to check your Yahoo or Hotmail email account, or track a UPS shipment, you were using a complicated application running on a distant server computer. Even when you used Amazon.com’s shopping-cart system to order a book – or when you subsequently posted a review of that book on the Amazon site – you were tapping into the Internet’s latent potential.

For the most part, the early utility services were rudimentary, involving the exchange of a small amount of data. The reason was simple: More complex services, the kind that might replace the software on your hard drive, required the rapid transfer of very large quantities of data, and that just wasn’t practical with traditional, low-speed dial-up connections. Running such services would quickly overload the capacity of telephone lines or overwhelm your modem. Your PC would grind to a halt. Before sophisticated services could proliferate, a critical mass of people had to have high-speed, broadband connections. That only began to happen late in the 1990s during the great dotcom investment boom, when phone and cable companies rushed to replace their copper wires with optical fibers – hair-thin strands of glass that carry information as pulses of light rather than electric currents – and retool their networks to carry virtually unlimited quantities of data.

The first clear harbinger of the second coming of the Internet – what would eventually be dubbed Web 2.0 – appeared out of nowhere in the summer of 1999. It came in the form of a small, free software program called Napster. Written over a few months by an 18-year-old college dropout named Shawn Fanning, Napster allowed people to share music over the Internet in a whole new way. It scanned the hard drive of anyone who installed the program, and then it created, on a central server computer operated by Fanning, a directory of information on all the song files it found, cataloguing their titles, the bands that performed them, the albums they came from, and their audio quality. Napster users searched this directory to find songs they wanted, which they then downloaded directly from other users’ computers. It was easy and, if you had a broadband connection, it was fast. In a matter of hours, you could download hundreds of tunes.
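The three-step flow described above – scan the user’s drive, catalogue the findings on a central server, then transfer files directly between users – can be sketched in a few lines of Python. This is only an illustration of the hybrid architecture, not Napster’s actual code or protocol; the class and method names (`PeerIndex`, `Peer`, `register`, `search`) are invented here.

```python
class PeerIndex:
    """Central server: a directory of which peers hold which songs."""

    def __init__(self):
        self._songs = {}  # song title -> set of peer addresses

    def register(self, peer_address, titles):
        """A peer reports the song files found on its hard drive."""
        for title in titles:
            self._songs.setdefault(title, set()).add(peer_address)

    def search(self, title):
        """Return the addresses of peers that hold a given song."""
        return sorted(self._songs.get(title, set()))


class Peer:
    """A user's PC: shares a local library and downloads from other peers."""

    def __init__(self, address, library):
        self.address = address
        self.library = dict(library)  # song title -> file contents

    def join(self, index):
        # Step 1: scan the local collection and report it to the directory.
        index.register(self.address, self.library.keys())

    def download(self, index, title, peers):
        # Step 2: ask the central directory who has the song...
        for addr in index.search(title):
            # Step 3: ...then fetch the file directly from another peer --
            # the file itself never passes through the central server.
            if addr != self.address:
                self.library[title] = peers[addr].library[title]
                return True
        return False


index = PeerIndex()
alice = Peer("alice", {"Song A": b"audio bytes"})
bob = Peer("bob", {})
peers = {"alice": alice, "bob": bob}
alice.join(index)
bob.join(index)
bob.download(index, "Song A", peers)  # bob now holds "Song A" locally
```

The key design point, as the paragraph notes, is that only the catalogue is centralized; the storage and the bandwidth are supplied by the users’ own machines.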

Napster, not surprisingly, became wildly popular, particularly on college campuses where high-speed Net connections were common. By early 2001, according to an estimate by market researcher Media Metrix, more than 26 million people were using the service, and they were spending more than 100 million hours a month exchanging music files. Shawn Fanning’s invention showed the world, for the first time, how the Internet could allow many computers to act as a single shared computer, with thousands or even millions of people having access to the combined contents of previously private databases. Although every user had to install a little software program on his own PC, the real power of Napster lay in the network itself – in the way it created a central file-management system and the way it allowed data to be transferred easily between computers, even ones running on opposite sides of the globe.

There was just one problem. It wasn’t legal. The vast majority of the billions of songs downloaded through Napster were owned by the artists and record companies that had produced them. Sharing them without permission or payment was against the law. The arrival of Napster had turned millions of otherwise law-abiding citizens into digital shoplifters, setting off the greatest, or at least the broadest, orgy of looting in history. The musicians and record companies fought back, filing lawsuits charging Fanning’s company with copyright infringement. Their legal counterattack culminated in the closing of the service in the summer of 2001, just two years after it had launched.

Napster died, but the business of supplying computing services over the Internet exploded in its wake. Many of us now spend more time using the new Web 2.0 services than we do running traditional software applications from our hard drives. We rely on the new utility grid to connect with our friends at social networks like MySpace and Facebook, to manage our photo collections at sites like Flickr and Photobucket, to create imaginary selves in virtual worlds like World of Warcraft and Disney’s Club Penguin, to watch videos at sites like YouTube and Joost, to write blogs with WordPress or memos with Google Docs, to follow breaking news through feed readers like Rojo or Bloglines, and to store our files on “virtual hard drives” like Omnidrive and Box.

All these services hint at the revolutionary potential of the new computing grid and the information utilities that run on it. In the years ahead, more and more of the information-processing tasks that we rely on, at home and at work, will be handled by big data centers located out on the Internet. The nature and economics of computing will change as dramatically as the nature and economics of mechanical power changed with the rise of electric utilities in the early years of the last century. The consequences for society – for the way we live, work, learn, communicate, entertain ourselves, and even think – promise to be equally profound. If the electric dynamo was the machine that fashioned twentieth-century society – that made us who we are – the information dynamo is the machine that will fashion the new society of the twenty-first century.