Last week, in its quarterly earnings report, Amazon.com revealed for the first time how much money its cloud computing operation, Amazon Web Services, takes in. The numbers were impressive. AWS has become an $8 billion business, and its revenues continue to grow swiftly, nearly doubling in the most recent quarter from the same period last year. The unit’s profit margin — a surprisingly robust 21 percent — is vastly wider than that of the company’s retailing operation. Indeed, without AWS, Amazon would have lost a lot of money in the quarter instead of posting a narrow profit.
AWS’s results show how well established “the cloud” has become. Most personal computing these days relies on cloud services — lose your connection, and your computing device becomes pretty much useless — and businesses, too, are looking more and more to the cloud, rather than their own data centers, to fill their information technology needs. It’s easy to forget how quickly this epochal shift in the nature of computing has occurred. Just ten years ago, the term “cloud computing” was unknown, and the idea that computing would become a centrally managed utility service was considered laughable by many big IT companies and their customers. Back then, in 2005, I wrote an article for MIT’s Sloan Management Review titled “The End of Corporate Computing” in which I argued that computing was fated to become a utility, with big, central data centers feeding services to customers over the internet’s grid. (The article inspired my 2008 book The Big Switch.) I got plenty of things wrong in the article, but I think the ensuing ten years have shown that the piece was fundamentally on target in predicting the rise of what we now call the cloud. So here, to mark the tenth birthday of the article, is the full text of “The End of Corporate Computing.”
Something happened in the first years of the 20th century that would have seemed unthinkable just a few decades earlier: Manufacturers began to shut down and dismantle their waterwheels, steam engines and electric generators. Since the beginning of the Industrial Age, mills and factories had had no choice but to maintain private power plants to run their machinery — power generation was a seemingly intrinsic part of doing business — but as the new century dawned, an alternative was emerging. Dozens of fledgling electricity producers were erecting central generating stations and using a network of wires to distribute their power to distant customers. Manufacturers no longer had to run their own dynamos; they could simply buy the electricity they needed, as they required it, from the new suppliers. Power generation was being transformed from a corporate function into a utility.
Now, almost exactly a century later, history is repeating itself. The most important commercial development of the last 50 years — information technology — is undergoing a similar transformation. It, too, is beginning an inexorable shift from being an asset that companies own — in the form of computers, software and myriad related components — to being a service that they purchase from utility providers. Few in the business world have contemplated the full magnitude of this change or its far-reaching consequences. To date, popular discussions of utility computing have rarely progressed beyond a recitation of IT vendors’ marketing slogans, laden with opaque terms like “autonomic systems,” “server virtualization” and “service-oriented architecture.”[1] Rather than illuminate the future, such gobbledygook has only obscured it.
The prevailing rhetoric is, moreover, too conservative. It assumes that the existing model of IT supply and use — and the corporate data center that lies at its core — will endure. But that view is perilously short-sighted. The traditional model’s economic foundation is already crumbling, and it is unlikely to survive in the long run. As the earlier transformation of electricity supply suggests, IT’s shift from a fragmented capital asset to a centralized utility service will be a momentous one. It will overturn strategic and operating assumptions, alter industrial economics, upset markets, and pose daunting challenges to every user and vendor. The history of the commercial application of information technology has been characterized by astounding leaps, but nothing that has come before — not even the introduction of the personal computer or the opening of the Internet — will match the upheaval that lies just over the horizon.
From Asset to Expense
Information technology, like steam power and electricity before it, is what economists call a general-purpose technology.[2] It is used by all sorts of companies to do all kinds of things, and it brings widespread and fundamental changes to commerce and society. Because of its broad application, a general-purpose technology offers the potential for considerable economies of scale if its supply can be consolidated. But those economies can take a long time to be fully appreciated and even longer to be comprehensively exploited. During the early stages in the development of a general-purpose technology, when there are few technical standards and no broad distribution network, the technology is impossible to furnish centrally. By necessity its supply is fragmented. Individual companies have to purchase the various components required to use the technology, house those parts on site, meld them into a working system and hire a staff of specialists to maintain them.
Such fragmentation of supply is inherently wasteful. It forces large capital investments and heavy fixed costs on firms, and it leads to redundant expenditures and high levels of overcapacity, both in the technology itself and in the labor force operating it. The situation is ideal for the suppliers of the components of the technology — they reap the benefits of overinvestment — but it is ultimately unsustainable. As the technology matures and central distribution becomes possible, large-scale utility suppliers arise to displace the private providers. Although companies may take years to abandon their proprietary supply operations (and all the sunk costs they represent), the savings offered by utilities eventually become too compelling to resist, even for the largest enterprises. Abandoning the old model becomes a competitive necessity.
The evolution of electricity supply provides a clear model of this process. When the commercial production of electricity became possible around 1880, many small utility suppliers quickly popped up in urban areas. These were largely mom-and-pop operations that used tiny coal-fired dynamos to generate modest amounts of power. The electricity they produced was in the form of direct current, which could not be transmitted far, so their service distance was limited to about a mile. And their high-cost operations forced them to charge steep prices, so their customers were generally restricted to prosperous stores and offices, wealthy homeowners, and municipal agencies, all of which used the electricity mainly for lighting.
For large industrial concerns, relying on these small central stations was not an option. To produce the great quantities of reliable electricity needed to run their plants, they had no choice but to build their own dynamos. They would contract with electrical supply houses like General Electric and Westinghouse to provide the components of on-site generators as well as the expertise and personnel needed to construct them, and they would hire electrical engineers and other specialists to operate the complex equipment and meld it with their production processes. During the early years of electrification, privately owned dynamos quickly came to dominate. By 1902, 50,000 private generating plants had been built in the United States, far outstripping the 3,600 central stations run by utilities.[3] By 1907, factories were producing about 60% of all the electricity used in the country.[4]
But even as big manufacturers rushed to set up in-house generators, some small industrial concerns, such as urban printing shops, were taking a different route. They couldn’t afford to build generators and hire workers to maintain them, so they had to rely on nearby central stations, even if that meant paying high per-kilowatt rates and enduring frequent disruptions in supply. At the time, these small manufacturers must have felt like laggards in the race to electrification, forced to adopt a seemingly inferior supply model in order to tap into the productivity gains of electric power. As it turned out, they were the vanguard. Soon, even their largest counterparts would be following their lead, drawn by the increasingly obvious advantages of purchasing electricity from outside suppliers.
A series of technical advances set the stage for that shift. First, massive thermal turbines were developed, offering the potential for much greater economies of scale. Second, the introduction of alternating current allowed power to be transmitted over great distances, expanding the sets of customers that central plants could serve. Third, converters were created that enabled utilities to switch between different forms of current, allowing old equipment to be incorporated into the new system. Finally, electric motors capable of operating on alternating current were invented, enabling factories to tap into the emerging electric grid to run their machines. As early as 1900, all the technological pieces were in place to centralize the supply of power to manufacturers and to render obsolete their isolated power plants.[5]
But technical progress was not enough. To overturn the status quo, a business visionary was needed, someone able to see how the combination of technological, market and economic trends could lead to an entirely new model of utility supply. That person arrived in the form of a bespectacled English bookkeeper named Samuel Insull. Infatuated with electricity, Insull had immigrated to New York in 1880 and soon became Thomas Edison’s most trusted advisor, helping the famous inventor expand his business empire. But Insull’s greatest achievement came after he left Edison’s employ, in 1892, when he moved to Chicago to assume the presidency of a small, independent power producer with three central stations and just 5,000 customers. In less than 25 years, he would turn that little company into one of the country’s largest enterprises, a giant monopolistic utility named Commonwealth Edison.
Insull was the first to realize that, by capitalizing on new technologies to consolidate generating capacity, centralized utilities could fulfill the power demands of even the largest factories. Moreover, utilities’ superior economies of scale, combined with their ability to spread demand across many users and thus achieve higher capacity-utilization rates, would enable them to provide much cheaper electricity than the manufacturers could achieve with their private, sub-scale dynamos. Insull acted aggressively on his insight, buying up small utilities throughout Chicago and installing mammoth 5,000-kilowatt generators in his own plants. Just as important, he pioneered electricity metering and variable pricing, which enabled him to slash the rates charged to big users and further smooth demand. Finally, he launched an elaborate marketing campaign to convince manufacturers that they would be better off shutting down their generators and buying electricity from his utility.[6]
Insull’s vision became reality, as Chicago manufacturers flocked to his company. In 1908, a reporter for Electrical World and Engineer noted that, “although isolated plants are still numerous in Chicago, they were never so hard pressed by central station service as now. . . . The Commonwealth Edison Company has among its customers establishments formerly run by some of the largest isolated plants in the city.”[7] The tipping point had arrived. Although many manufacturers would continue to produce their own electricity for years, the transition from private to utility power was under way. Between 1907 and 1920, utilities’ share of total U.S. electricity production jumped from 40% to 70%; by 1930, it had reached 80%.[8]
By turning electricity from a complex asset into a routine variable expense, manufacturers reduced their fixed costs and freed up capital for more productive purposes. At the same time, they were able to trim their corporate staffs, temper the risk of technology obsolescence and malfunction, and relieve their managers of a major source of distraction. Once unimaginable, the broad adoption of utility power had become inevitable. The private power plant was obsolete.
IT’s Transformation Begins
Of course, all historical analogies have their limits, and information technology differs from electricity in many important ways. As a commodity, information has little in common with electric current. But there are deep similarities as well — similarities that are easy for modern-day observers to overlook. Today, people see electricity as a “simple” utility, a standardized and unremarkable current that comes safely and predictably through sockets in walls. The innumerable applications of electric power, from table lamps in homes to machine tools on assembly lines, have become so commonplace that we no longer consider them to be elements of the underlying technology — they’ve taken on separate, familiar lives of their own. But it wasn’t always so.
When electrification began, it was a complex, unpredictable and largely untamed force that changed almost everything it touched. Its application layer, to borrow a modern term, was as much a part of the technology as the dynamos, the power lines and the current itself. All companies had to figure out how to apply electricity to their own businesses, often making sweeping changes to longstanding practices, work flows and organizational structures. And as the technology advanced, they had to struggle with old and often incompatible equipment — the “legacy systems” that can impede progress.
As a business resource, or input, information technology today certainly looks a lot like electric power did at the start of the last century. Companies go to vendors to purchase various components — computers, storage drives, network switches and all sorts of software — and cobble them together into complex information-processing plants, or data centers, that they house within their own walls. They hire specialists to maintain the plants, and they often bring in outside consultants to solve particularly thorny problems. Their executives are routinely sidetracked from their real business — manufacturing automobiles, for instance, and selling them at a profit — by the need to keep their company’s private IT infrastructure running smoothly.
The creation of tens of thousands of independent data centers, all using virtually the same hardware and, for the most part, running similar software, has imposed severe penalties on individual firms as well as the broader economy.[9] It has led to the overbuilding of IT assets, resulting in extraordinarily low levels of capacity utilization. One recent study of six corporate data centers revealed that most of their 1,000 servers were using just 10% to 35% of their available processing power.[10] Desktop computers fare even worse, with IBM estimating average capacity utilization rates of just 5%.[11] Gartner Inc., the research consultancy based in Stamford, Connecticut, suggests that between 50% and 60% of a typical company’s data storage capacity is wasted.[12]
Overcapacity is by no means limited to hardware. Because software applications are highly scalable — able, in other words, to serve additional users at little or no incremental cost — redundant installations of common programs also create acute diseconomies, in both upfront expenditures and ongoing costs and fees. The replication, from company to company, of IT departments with largely interchangeable skills represents an overinvestment in labor as well. According to a 2003 survey, about 60% of the average company’s IT staffing budget goes to routine support and maintenance.[13]
When overcapacity is combined with redundant functionality, the conditions are ripe for a shift to centralized supply. Yet companies continue to invest large sums in maintaining and even expanding their private, subscale data centers. Why? For the same reason that manufacturers continued to install private electric generators during the early decades of the 20th century: because of the lack of a viable, large-scale utility model. But the emergence of that model is well under way. Rudimentary forms of utility computing are proliferating today, and many companies are moving quickly to capitalize on them. Some are using the vast data centers maintained by vendors like IBM, Hewlett-Packard and Electronic Data Systems to supplement or provide an emergency backup to their own hardware. Others are tapping into applications that run on the computers of distant software suppliers. Such hosted applications, ranging from a transportation management application (from LeanLogistics of Holland, Michigan) to an airline reservation system (from Amadeus, headquartered in Madrid) to a customer-service program (from RightNow Technologies of Bozeman, Montana), demonstrate that even very complex applications can be supplied as utility services over the Internet.
What these early efforts don’t show is the full extent and power of a true utility model. Today’s piecemeal utility services exist as inputs into traditional data centers; individual companies still have to connect them with their old hardware and software. Indeed, firms often forgo otherwise attractive utility services because the required integration with their legacy systems is too difficult. Only when an outside supplier takes responsibility for delivering all of a company’s IT requirements, from data processing to storage to applications, will true utility computing have arrived. The utility model requires that ownership of the assets that have traditionally resided inside widely dispersed data centers be consolidated and transferred to utilities.
That process will take years to unfold, but the technological building blocks are already moving into place. Here, three advances — virtualization, grid computing, and Web services — are of particular importance, although their significance has often been obscured by the arcane terminology used to describe them. These three technologies play, in different ways, a role similar to that of the early current converters: They enable a large, tightly integrated system to be constructed out of heterogeneous and previously incompatible components. Virtualization erases the differences between proprietary computing platforms, enabling applications designed to run on one operating system to be deployed elsewhere. Grid computing allows large numbers of hardware components, such as servers or disk drives, to effectively act as a single device, pooling their capacity and allocating it automatically to different jobs. Web services standardize the interfaces between applications, turning them into Lego-like modules that can be assembled and disassembled easily.
Individually all these technologies are interesting, but in combination they become truly revolutionary. Together with high-capacity, fiber-optic communication networks, they can turn a fragmented, unwieldy set of hardware and software components into a single, flexible infrastructure that numerous companies can share and deploy, each in a different way. And as the number of users served by a system goes up, its demand load becomes more balanced, its capacity utilization rate rises and its economies of scale expand. Given that these technologies will evolve and advance, while new and related ones emerge, the ability to provide IT as a utility — and the economic incentives for doing so — will only continue to grow.
The biggest impediment to utility computing will not be technological but attitudinal. As in the shift to centralized electrical power, the prime obstacle will be entrenched management assumptions and the traditional practices and past investments on which they are founded. Large companies will pull the plug on their data centers only after the reliability, stability and benefits of IT utilities have been clearly established. For that to occur, a modern-day Samuel Insull needs to arrive with a clear vision of how the IT utility business will operate as well as the imagination and wherewithal to make it happen. Like his predecessor, this new visionary will build highly efficient, large-scale IT plants, weave together sophisticated metering and pricing systems, and offer attractive and flexible sets of services tailored to diverse clients.[14] And he will make a compelling marketing case to corporate executives, demonstrating that the centralized management of previously dispersed resources not only cuts costs and frees up capital, but also improves security, enhances flexibility and reduces risk. He will, in short, invent an industry.
The Shape of a New Industry
Exactly what that industry will look like remains to be seen, but it’s possible to envision its contours. It will likely have three major components. At the center will be the IT utilities themselves — big companies that will maintain core computing resources in central plants and distribute them to end users. Serving the utilities will be a diverse array of component suppliers — the makers of computers, storage units, networking gear, operating and utility software, and applications. And finally, large network operators will maintain the ultra-high-capacity data-communication lines needed for the system to work. Some companies will no doubt try to operate simultaneously in more than one of these categories.
What’s particularly striking about this model is that it reveals the unique characteristics that make IT especially well-suited to becoming a utility service. With electricity, only the basic generation function can be centralized, and the applications are delivered physically, through motors, light bulbs and other electric devices that have to be provisioned locally, at the user’s site. With IT, the immediate applications take the form of software, which can be run remotely by a utility or one of its suppliers. Even applications customized to a single customer can be housed at a supplier’s site. The end user needs to maintain only the input and output devices — monitors, printers, keyboards, scanners, portable devices, sensors and the like — necessary to receive, transmit and manipulate data, and, as necessary, reconfigure the package of services received. Some customers may well choose to run certain applications locally, but utilities will be able to own and operate the bulk of the hardware and software, further magnifying their scale advantages.
Which companies will emerge as the new IT utilities? At least four possibilities exist. First are the big traditional makers of enterprise computing hardware that have deep experience in setting up and running complex business systems — companies like IBM, Hewlett-Packard and Sun Microsystems, all of which, not surprisingly, have already been aggressively positioning themselves as suppliers of utility services. Second are various specialized hosting firms, like VeriCenter in Houston or MCI’s Digex subsidiary, that even today are running the entire data centers of some small and mid-sized companies. These specialized firms, which struggled to survive after the dot-com collapse, are beginning to resemble the operators of the original central stations during the early stages of electrification. Third are Internet innovators like Google and even Amazon.com that are building extensive, sophisticated computing networks that could theoretically be adapted to much broader uses.[15] Finally, there are the as-yet-unknown startups that could emerge with ingenious new strategies. Because the utility industry will be scale-driven and capital-intensive, size and focus will be critical to success, and any company will find it difficult to dominate while also pursuing other business goals.
To date, utility computing seems to be following the pattern of disruptive innovation defined by Clayton Christensen of the Harvard Business School — initially gaining traction at the low end of the market before ultimately emerging as the dominant supply model.[16] As such, it may pose grave threats to some of today’s most successful component suppliers, particularly companies like Microsoft, Dell, Oracle and SAP that have thrived by selling directly to corporations. The utility model promises to isolate these vendors from the end users, forcing them to sell their products and services to or through big, centralized utilities, which will have significantly greater bargaining power. Most of the broadly used components, from computers to operating systems to complex “enterprise applications” that automate common business processes, will likely be purchased as cheap, generic commodities.[17]
Of course, today’s leading component suppliers have considerable market power and management savvy, and they have time to adapt their strategies as the evolution of the utility model proceeds. Some of them may end up trying to forward-integrate into the utility business itself, a move that has good precedent. When manufacturers began to purchase electricity from utilities, the two largest vendors of generators and associated components, General Electric and Westinghouse, expanded aggressively into that business, buying ownership stakes in many electric utilities. As early as 1895, GE had investments totaling more than $59 million in utilities across the United States and Europe.[18]
But that precedent also reveals the dangers of such consolidation moves, for buyers and sellers alike. As the U.S. electricity business became increasingly concentrated in the hands of a few companies, the government, fearful of private monopoly control over such a critical resource, stepped in to impose greater restrictions on the industry. The components of IT are more diverse, but the possibility that a few companies will seize excessive control over the infrastructure remains a concern. Not only would monopolization lead to higher costs for end users, it might also retard the pace of innovation, to the detriment of many. Clearly, maintaining a strong degree of competition among both utilities and component suppliers will be essential to a healthy and productive IT sector in the coming years.
The View from the Future
Any prediction about the future, particularly one involving the pace and direction of technological progress, is speculative, and the scenario laid out here is no exception. But if technological advances are often unforeseeable, the economic and market forces that guide the evolution of business generally play out in logical and consistent ways. The history of commerce has repeatedly shown that redundant investment and fragmented capacity provide strong incentives for centralizing supply. And advances in computing and networking have allowed information technology to operate in an increasingly “virtual” fashion, with ever greater distances between the site of the underlying technological assets and the point at which people access, interpret and manipulate the information. Given this trend, radical changes in corporate IT appear all but inevitable.
Sometimes, the biggest business transformations seem inconceivable even as they’re occurring. Today when people look back at the supply of power in business, they see an evolution that unfolded with a clear and inevitable logic. It’s easy to discern that the practice of individual companies building and maintaining proprietary power plants was a transitory phenomenon, an artifact of necessity that never made much sense economically. From that viewpoint, electricity had to become a utility. But what seems obvious now must have seemed far-fetched, even ludicrous, to the factory owners and managers that had for decades maintained their own sources of power.
Now imagine what future generations will see when they look back at the current time a hundred years hence. Won’t the private data center seem just as transitory a phenomenon — just as much a stop-gap measure — as the private dynamo? Won’t the rise of IT utilities seem both natural and necessary? And won’t the way corporate computing is practiced today appear fundamentally illogical — and inherently doomed?
Notes

1. There are notable exceptions. See, for example, M.A. Rappa, “The Utility Business Model and the Future of Computing Services,” IBM Systems Journal 43, no. 1 (2004): 32-42; and P.A. Strassmann, “Transforming IT,” Computerworld, Nov. 5, 2001.
2. The term was introduced in a 1992 paper by T.F. Bresnahan and M. Trajtenberg, later published as “General Purpose Technologies: ‘Engines of Growth’?” Journal of Econometrics 65, no. 1 (1995): 83-108. See also E. Helpman, ed., “General Purpose Technologies and Economic Growth” (Cambridge, Mass.: MIT Press, 1998).
3. A. Friedlander, “Power and Light: Electricity in the U.S. Energy Infrastructure, 1870-1940” (Reston, Virginia: Corporation for National Research Initiatives, 1996), 51.
4. D.E. Nye, “Electrifying America: Social Meanings of a New Technology” (Cambridge, Mass.: MIT Press, 1990), 236.
5. T.P. Hughes, “Networks of Power: Electrification in Western Society, 1880-1930” (Baltimore: Johns Hopkins University Press, 1983), 106-139; and R.B. DuBoff, “Electric Power in American Manufacturing, 1889-1958” (New York: Arno Press, 1979), 42-45.
6. For more on Insull’s career and accomplishments, see Hughes (1983), 201-226; and H. Evans, “They Made America” (New York: Little, Brown, 2004), 318-333.
7. “The Systems and Operating Practice of the Commonwealth Edison Company of Chicago,” Electrical World and Engineer 51 (1908): 1023, as quoted in Hughes (1983), 223.
8. DuBoff (1979), 40.
9. For a discussion of the homogenization of information technology in business, see N.G. Carr, “Does IT Matter? Information Technology and the Corrosion of Competitive Advantage” (Boston: Harvard Business School Press, 2004).
10. A. Andrzejak, M. Arlitt and J. Rolia, “Bounding the Resource Savings of Utility Computing Models,” Hewlett-Packard Laboratories Working Paper HPL-2002-339, Nov. 27, 2002.
11. V. Berstis, “Fundamentals of Grid Computing,” IBM Redbooks Paper, 2002.
12. C. Hildebrand, “Why Squirrels Manage Storage Better than You Do,” Darwin, April 2003.
13. B. Gomolski, “Gartner 2003 IT Spending and Staffing Survey Results” (Gartner Research, 2003).
14. Effective and standardized metering systems will be as crucial to the formation of large-scale IT utilities as they were to electric utilities, and work in this area is progressing rapidly. See, for example, V. Albaugh and H. Madduri, “The Utility Metering Service of the Universal Management Infrastructure,” IBM Systems Journal 43, no. 1 (2004): 179-189.
15. Google and Amazon.com already provide utility IT services. Companies draw on Google’s data centers and software to distribute advertisements over the Internet and to add search functions to their corporate Web sites. Amazon, in addition to running its own on-line store, rents its sophisticated retailing platform to other merchants such as Target, JC Penney and Borders.
16. C.M. Christensen, “The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail” (Boston: Harvard Business School Press, 1997).
17. It’s telling that today’s vendors of utility IT services, such as hosted applications and remote data centers, have been among the most aggressive adopters of open-source software and other commodity components.
18. Nye (1990), 170-174.
Image: The Electricity Building, 1893 Columbian Exposition, Chicago.