The end of corporate computing (10th anniversary edition)

Last week, in its quarterly earnings report, Amazon.com revealed for the first time how much money its cloud computing operation, Amazon Web Services, takes in. The numbers were impressive. AWS has become an $8 billion business, and its revenues continue to grow swiftly, nearly doubling in the most recent quarter compared with the same period last year. The unit’s profit margin — a surprisingly robust 21 percent — is vastly wider than that of the company’s retailing operation. Indeed, without AWS, Amazon would have lost a lot of money in the quarter instead of posting a narrow profit.

AWS’s results show how well established “the cloud” has become. Most personal computing these days relies on cloud services — lose your connection, and your computing device becomes pretty much useless — and businesses, too, are looking more and more to the cloud, rather than their own data centers, to fill their information technology needs. It’s easy to forget how quickly this epochal shift in the nature of computing has occurred. Just ten years ago, the term “cloud computing” was unknown, and the idea that computing would become a centrally managed utility service was considered laughable by many big IT companies and their customers. Back then, in 2005, I wrote an article for MIT’s Sloan Management Review titled “The End of Corporate Computing” in which I argued that computing was fated to become a utility, with big, central data centers feeding services to customers over the internet’s grid. (The article inspired my 2008 book The Big Switch.) I got plenty of things wrong in the article, but I think the ensuing ten years have shown that the piece was fundamentally on target in predicting the rise of what we now call the cloud. So here, to mark the tenth birthday of the article, is the full text of “The End of Corporate Computing.”

Something happened in the first years of the 20th century that would have seemed unthinkable just a few decades earlier: Manufacturers began to shut down and dismantle their waterwheels, steam engines and electric generators. Since the beginning of the Industrial Age, mills and factories had had no choice but to maintain private power plants to run their machinery — power generation was a seemingly intrinsic part of doing business — but as the new century dawned, an alternative was emerging. Dozens of fledgling electricity producers were erecting central generating stations and using a network of wires to distribute their power to distant customers. Manufacturers no longer had to run their own dynamos; they could simply buy the electricity they needed, as they required it, from the new suppliers. Power generation was being transformed from a corporate function into a utility.

Now, almost exactly a century later, history is repeating itself. The most important commercial development of the last 50 years — information technology — is undergoing a similar transformation. It, too, is beginning an inexorable shift from being an asset that companies own — in the form of computers, software and myriad related components — to being a service that they purchase from utility providers. Few in the business world have contemplated the full magnitude of this change or its far-reaching consequences. To date, popular discussions of utility computing have rarely progressed beyond a recitation of IT vendors’ marketing slogans, laden with opaque terms like “autonomic systems,” “server virtualization” and “service-oriented architecture” [1]. Rather than illuminate the future, such gobbledygook has only obscured it.

The prevailing rhetoric is, moreover, too conservative. It assumes that the existing model of IT supply and use — and the corporate data center that lies at its core — will endure. But that view is perilously short-sighted. The traditional model’s economic foundation is already crumbling, and it is unlikely to survive in the long run. As the earlier transformation of electricity supply suggests, IT’s shift from a fragmented capital asset to a centralized utility service will be a momentous one. It will overturn strategic and operating assumptions, alter industrial economics, upset markets, and pose daunting challenges to every user and vendor. The history of the commercial application of information technology has been characterized by astounding leaps, but nothing that has come before — not even the introduction of the personal computer or the opening of the Internet — will match the upheaval that lies just over the horizon.

From Asset to Expense

Information technology, like steam power and electricity before it, is what economists call a general-purpose technology [2]. It is used by all sorts of companies to do all kinds of things, and it brings widespread and fundamental changes to commerce and society. Because of its broad application, a general-purpose technology offers the potential for considerable economies of scale if its supply can be consolidated. But those economies can take a long time to be fully appreciated and even longer to be comprehensively exploited. During the early stages in the development of a general-purpose technology, when there are few technical standards and no broad distribution network, the technology is impossible to furnish centrally. By necessity its supply is fragmented. Individual companies have to purchase the various components required to use the technology, house those parts on site, meld them into a working system and hire a staff of specialists to maintain them.

Such fragmentation of supply is inherently wasteful. It forces large capital investments and heavy fixed costs on firms, and it leads to redundant expenditures and high levels of overcapacity, both in the technology itself and in the labor force operating it. The situation is ideal for the suppliers of the components of the technology — they reap the benefits of overinvestment — but it is ultimately unsustainable. As the technology matures and central distribution becomes possible, large-scale utility suppliers arise to displace the private providers. Although companies may take years to abandon their proprietary supply operations (and all the sunk costs they represent), the savings offered by utilities eventually become too compelling to resist, even for the largest enterprises. Abandoning the old model becomes a competitive necessity.

The evolution of electricity supply provides a clear model of this process. When the commercial production of electricity became possible around 1880, many small utility suppliers quickly popped up in urban areas. These were largely mom-and-pop operations that used tiny coal-fired dynamos to generate modest amounts of power. The electricity they produced was in the form of direct current, which could not be transmitted far, so their service distance was limited to about a mile. And their high-cost operations forced them to charge steep prices, so their customers were generally restricted to prosperous stores and offices, wealthy homeowners, and municipal agencies, all of which used the electricity mainly for lighting.

For large industrial concerns, relying on these small central stations was not an option. To produce the great quantities of reliable electricity needed to run their plants, they had no choice but to build their own dynamos. They would contract with electrical supply houses like General Electric and Westinghouse to provide the components of on-site generators as well as the expertise and personnel needed to construct them, and they would hire electrical engineers and other specialists to operate the complex equipment and meld it with their production processes. During the early years of electrification, privately owned dynamos quickly came to dominate. By 1902, 50,000 private generating plants had been built in the United States, far outstripping the 3,600 central stations run by utilities [3]. By 1907, factories were producing about 60% of all the electricity used in the country [4].

But even as big manufacturers rushed to set up in-house generators, some small industrial concerns, such as urban printing shops, were taking a different route. They couldn’t afford to build generators and hire workers to maintain them, so they had to rely on nearby central stations, even if that meant paying high per-kilowatt rates and enduring frequent disruptions in supply. At the time, these small manufacturers must have felt like laggards in the race to electrification, forced to adopt a seemingly inferior supply model in order to tap into the productivity gains of electric power. As it turned out, they were the vanguard. Soon, even their largest counterparts would be following their lead, drawn by the increasingly obvious advantages of purchasing electricity from outside suppliers.

A series of technical advances set the stage for that shift. First, massive thermal turbines were developed, offering the potential for much greater economies of scale. Second, the introduction of alternating current allowed power to be transmitted over great distances, expanding the sets of customers that central plants could serve. Third, converters were created that enabled utilities to switch between different forms of current, allowing old equipment to be incorporated into the new system. Finally, electric motors capable of operating on alternating current were invented, enabling factories to tap into the emerging electric grid to run their machines. As early as 1900, all the technological pieces were in place to centralize the supply of power to manufacturers and to render obsolete their isolated power plants [5].

But technical progress was not enough. To overturn the status quo, a business visionary was needed, someone able to see how the combination of technological, market and economic trends could lead to an entirely new model of utility supply. That person arrived in the form of a bespectacled English bookkeeper named Samuel Insull. Infatuated with electricity, Insull had immigrated to New York in 1880 and soon became Thomas Edison’s most trusted advisor, helping the famous inventor expand his business empire. But Insull’s greatest achievement came after he left Edison’s employ, in 1892, when he moved to Chicago to assume the presidency of a small, independent power producer with three central stations and just 5,000 customers. In less than 25 years, he would turn that little company into one of the country’s largest enterprises, a giant monopolistic utility named Commonwealth Edison.

Insull was the first to realize that, by capitalizing on new technologies to consolidate generating capacity, centralized utilities could fulfill the power demands of even the largest factories. Moreover, utilities’ superior economies of scale, combined with their ability to spread demand across many users and thus achieve higher capacity-utilization rates, would enable them to provide much cheaper electricity than the manufacturers could achieve with their private, sub-scale dynamos. Insull acted aggressively on his insight, buying up small utilities throughout Chicago and installing mammoth 5,000-kilowatt generators in his own plants. Just as important, he pioneered electricity metering and variable pricing, which enabled him to slash the rates charged to big users and further smooth demand. Finally, he launched an elaborate marketing campaign to convince manufacturers that they would be better off shutting down their generators and buying electricity from his utility [6].

Insull’s vision became reality, as Chicago manufacturers flocked to his company. In 1908, a reporter for Electrical World and Engineer noted that, “although isolated plants are still numerous in Chicago, they were never so hard pressed by central station service as now. . . . The Commonwealth Edison Company has among its customers establishments formerly run by some of the largest isolated plants in the city” [7]. The tipping point had arrived. Although many manufacturers would continue to produce their own electricity for years, the transition from private to utility power was under way. Between 1907 and 1920, utilities’ share of total U.S. electricity production jumped from 40% to 70%; by 1930, it had reached 80% [8].

By turning electricity from a complex asset into a routine variable expense, manufacturers reduced their fixed costs and freed up capital for more productive purposes. At the same time, they were able to trim their corporate staffs, temper the risk of technology obsolescence and malfunction, and relieve their managers of a major source of distraction. Once unimaginable, the broad adoption of utility power had become inevitable. The private power plant was obsolete.

IT’s Transformation Begins

Of course, all historical analogies have their limits, and information technology differs from electricity in many important ways. As a commodity, information has little in common with electric current. But there are deep similarities as well — similarities that are easy for modern-day observers to overlook. Today, people see electricity as a “simple” utility, a standardized and unremarkable current that comes safely and predictably through sockets in walls. The innumerable applications of electric power, from table lamps in homes to machine tools on assembly lines, have become so commonplace that we no longer consider them to be elements of the underlying technology — they’ve taken on separate, familiar lives of their own. But it wasn’t always so.

When electrification began, it was a complex, unpredictable and largely untamed force that changed almost everything it touched. Its application layer, to borrow a modern term, was as much a part of the technology as the dynamos, the power lines and the current itself. All companies had to figure out how to apply electricity to their own businesses, often making sweeping changes to longstanding practices, work flows and organizational structures. And as the technology advanced, they had to struggle with old and often incompatible equipment — the “legacy systems” that can impede progress.

As a business resource, or input, information technology today certainly looks a lot like electric power did at the start of the last century. Companies go to vendors to purchase various components — computers, storage drives, network switches and all sorts of software — and cobble them together into complex information-processing plants, or data centers, that they house within their own walls. They hire specialists to maintain the plants, and they often bring in outside consultants to solve particularly thorny problems. Their executives are routinely sidetracked from their real business — manufacturing automobiles, for instance, and selling them at a profit — by the need to keep their company’s private IT infrastructure running smoothly.

The creation of tens of thousands of independent data centers, all using virtually the same hardware and, for the most part, running similar software, has imposed severe penalties on individual firms as well as the broader economy [9]. It has led to the overbuilding of IT assets, resulting in extraordinarily low levels of capacity utilization. One recent study of six corporate data centers revealed that most of their 1,000 servers were using just 10% to 35% of their available processing power [10]. Desktop computers fare even worse, with IBM estimating average capacity utilization rates of just 5% [11]. Gartner Inc., the research consultancy based in Stamford, Connecticut, suggests that between 50% and 60% of a typical company’s data storage capacity is wasted [12].

Overcapacity is by no means limited to hardware. Because software applications are highly scalable — able, in other words, to serve additional users at little or no incremental cost — redundant installations of common programs also create acute diseconomies, in both upfront expenditures and ongoing costs and fees. The replication, from company to company, of IT departments with largely interchangeable skills represents an overinvestment in labor as well. According to a 2003 survey, about 60% of the average company’s IT staffing budget goes to routine support and maintenance [13].

When overcapacity is combined with redundant functionality, the conditions are ripe for a shift to centralized supply. Yet companies continue to invest large sums in maintaining and even expanding their private, subscale data centers. Why? For the same reason that manufacturers continued to install private electric generators during the early decades of the 20th century: because of the lack of a viable, large-scale utility model. But the emergence of that model is well under way. Rudimentary forms of utility computing are proliferating today, and many companies are moving quickly to capitalize on them. Some are using the vast data centers maintained by vendors like IBM, Hewlett-Packard and Electronic Data Systems to supplement or provide an emergency backup to their own hardware. Others are tapping into applications that run on the computers of distant software suppliers. Such hosted applications, ranging from a transportation management application (from LeanLogistics of Holland, Michigan) to an airline reservation system (from Amadeus, headquartered in Madrid) to a customer-service program (from RightNow Technologies of Bozeman, Montana), demonstrate that even very complex applications can be supplied as utility services over the Internet.

What these early efforts don’t show is the full extent and power of a true utility model. Today’s piecemeal utility services exist as inputs into traditional data centers; individual companies still have to connect them with their old hardware and software. Indeed, firms often forgo otherwise attractive utility services because the required integration with their legacy systems is too difficult. Only when an outside supplier takes responsibility for delivering all of a company’s IT requirements, from data processing to storage to applications, will true utility computing have arrived. The utility model requires that ownership of the assets that have traditionally resided inside widely dispersed data centers be consolidated and transferred to utilities.

That process will take years to unfold, but the technological building blocks are already moving into place. Here, three advances — virtualization, grid computing, and Web services — are of particular importance, although their significance has often been obscured by the arcane terminology used to describe them. These three technologies play, in different ways, a role similar to that of the early current converters: They enable a large, tightly integrated system to be constructed out of heterogeneous and previously incompatible components. Virtualization erases the differences between proprietary computing platforms, enabling applications designed to run on one operating system to be deployed elsewhere. Grid computing allows large numbers of hardware components, such as servers or disk drives, to effectively act as a single device, pooling their capacity and allocating it automatically to different jobs. Web services standardize the interfaces between applications, turning them into Lego-like modules that can be assembled and disassembled easily.
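
To make the idea of Lego-like modularity concrete, here is a minimal sketch, phrased in today’s terms rather than drawn from the article itself, of one application consuming another’s functionality through nothing but a standardized HTTP-and-JSON interface. The endpoint URL and field names are hypothetical.

    import json
    import urllib.request

    # A hypothetical utility-hosted service exposed as a web service. The
    # consumer needs no knowledge of the supplier's hardware, operating
    # system or programming language; only the interface matters.
    ENDPOINT = "https://utility.example.com/api/v1/tax/quote"  # hypothetical URL

    def get_tax_quote(order_total: float, region: str) -> float:
        """Ask a remote tax-calculation service how much tax is owed."""
        payload = json.dumps({"total": order_total, "region": region}).encode()
        request = urllib.request.Request(
            ENDPOINT, data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response)["tax"]

    # Swapping suppliers means changing the URL, not the calling code: the
    # standardized interface is what makes the modules interchangeable.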

Individually all these technologies are interesting, but in combination they become truly revolutionary. Together with high-capacity, fiber-optic communication networks, they can turn a fragmented, unwieldy set of hardware and software components into a single, flexible infrastructure that numerous companies can share and deploy, each in a different way. And as the number of users served by a system goes up, its demand load becomes more balanced, its capacity utilization rate rises and its economies of scale expand. Given that these technologies will evolve and advance, while new and related ones emerge, the ability to provide IT as a utility — and the economic incentives for doing so — will only continue to grow.
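
That utilization claim can be checked with a short simulation, my own illustrative sketch with invented workload numbers rather than anything from the article. Because each user’s demand fluctuates independently, a shared plant sized to the aggregate peak carries far less idle headroom than dedicated plants sized to each user’s individual peak:

    import random

    random.seed(42)
    USERS, HOURS = 1000, 24 * 30  # one month of hourly demand samples

    # Each user's hourly demand fluctuates independently around 10 units.
    demand = [[max(0.0, random.gauss(10, 4)) for _ in range(HOURS)]
              for _ in range(USERS)]

    # Fragmented model: every user provisions capacity for his own peak.
    dedicated_capacity = sum(max(user) for user in demand)

    # Utility model: one shared plant provisions for the aggregate peak.
    pooled = [sum(demand[u][h] for u in range(USERS)) for h in range(HOURS)]
    pooled_capacity = max(pooled)

    average_demand = sum(pooled) / HOURS
    print(f"dedicated utilization: {average_demand / dedicated_capacity:.0%}")
    print(f"pooled utilization:    {average_demand / pooled_capacity:.0%}")
    # Independent fluctuations largely cancel in aggregate, so the shared
    # plant runs far closer to its capacity: the utility's scale advantage.

With these invented numbers the dedicated plants sit well below half utilization while the pooled plant runs close to full; the exact percentages don’t matter, only the direction of the gap.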

The biggest impediment to utility computing will not be technological but attitudinal. As in the shift to centralized electrical power, the prime obstacle will be entrenched management assumptions and the traditional practices and past investments on which they are founded. Large companies will pull the plug on their data centers only after the reliability, stability and benefits of IT utilities have been clearly established. For that to occur, a modern-day Samuel Insull needs to arrive with a clear vision of how the IT utility business will operate as well as the imagination and wherewithal to make it happen. Like his predecessor, this new visionary will build highly efficient, large-scale IT plants, weave together sophisticated metering and pricing systems, and offer attractive and flexible sets of services tailored to diverse clients [14]. And he will make a compelling marketing case to corporate executives, demonstrating that the centralized management of previously dispersed resources not only cuts costs and frees up capital, but also improves security, enhances flexibility and reduces risk. He will, in short, invent an industry.

The Shape of a New Industry

Exactly what that industry will look like remains to be seen, but it’s possible to envision its contours. It will likely have three major components. At the center will be the IT utilities themselves — big companies that will maintain core computing resources in central plants and distribute them to end users. Serving the utilities will be a diverse array of component suppliers — the makers of computers, storage units, networking gear, operating and utility software, and applications. And finally, large network operators will maintain the ultra-high-capacity data-communication lines needed for the system to work. Some companies will no doubt try to operate simultaneously in more than one of these categories.

What’s particularly striking about this model is that it reveals the unique characteristics that make IT especially well-suited to becoming a utility service. With electricity, only the basic generation function can be centralized, and the applications are delivered physically, through motors, light bulbs and other electric devices that have to be provisioned locally, at the user’s site. With IT, the immediate applications take the form of software, which can be run remotely by a utility or one of its suppliers. Even applications customized to a single customer can be housed at a supplier’s site. The end user needs to maintain only the input and output devices — monitors, printers, keyboards, scanners, portable devices, sensors and the like — necessary to receive, transmit and manipulate data, and, as necessary, to reconfigure the package of services received. Some customers may well choose to run certain applications locally, but utilities will be able to own and operate the bulk of the hardware and software, further magnifying their scale advantages.

Which companies will emerge as the new IT utilities? At least four possibilities exist. First are the big traditional makers of enterprise computing hardware that have deep experience in setting up and running complex business systems — companies like IBM, Hewlett-Packard and Sun Microsystems, all of which, not surprisingly, have already been aggressively positioning themselves as suppliers of utility services. Second are various specialized hosting firms, like VeriCenter in Houston or MCI’s Digex subsidiary, that even today are running the entire data centers of some small and mid-sized companies. These specialized firms, which struggled to survive after the dot-com collapse, are beginning to resemble the operators of the original central stations during the early stages of electrification. Third are Internet innovators like Google and even Amazon.com that are building extensive, sophisticated computing networks that could theoretically be adapted to much broader uses [15]. Finally, there are the as-yet-unknown startups that could emerge with ingenious new strategies. Because the utility industry will be scale-driven and capital-intensive, size and focus will be critical to success, and any company will find it difficult to dominate while also pursuing other business goals.

To date, utility computing seems to be following the pattern of disruptive innovation defined by Clayton Christensen of the Harvard Business School — initially gaining traction at the low end of the market before ultimately emerging as the dominant supply model [16]. As such, it may pose grave threats to some of today’s most successful component suppliers, particularly companies like Microsoft, Dell, Oracle and SAP that have thrived by selling directly to corporations. The utility model promises to isolate these vendors from the end users, forcing them to sell their products and services to or through big, centralized utilities, which will have significantly greater bargaining power. Most of the broadly used components, from computers to operating systems to complex “enterprise applications” that automate common business processes, will likely be purchased as cheap, generic commodities [17].

Of course, today’s leading component suppliers have considerable market power and management savvy, and they have time to adapt their strategies as the evolution of the utility model proceeds. Some of them may end up trying to forward-integrate into the utility business itself, a move that has good precedent. When manufacturers began to purchase electricity from utilities, the two largest vendors of generators and associated components, General Electric and Westinghouse, expanded aggressively into that business, buying ownership stakes in many electric utilities. As early as 1895, GE had investments totaling more than $59 million in utilities across the United States and Europe [18].

But that precedent also reveals the dangers of such consolidation moves, for buyers and sellers alike. As the U.S. electricity business became increasingly concentrated in the hands of a few companies, the government, fearful of private monopoly control over such a critical resource, stepped in to impose greater restrictions on the industry. The components of IT are more diverse, but the possibility that a few companies will seize excessive control over the infrastructure remains a concern. Not only would monopolization lead to higher costs for end users, it might also retard the pace of innovation, to the detriment of many. Clearly, maintaining a strong degree of competition among both utilities and component suppliers will be essential to a healthy and productive IT sector in the coming years.

The View from the Future

Any prediction about the future, particularly one involving the pace and direction of technological progress, is speculative, and the scenario laid out here is no exception. But if technological advances are often unforeseeable, the economic and market forces that guide the evolution of business generally play out in logical and consistent ways. The history of commerce has repeatedly shown that redundant investment and fragmented capacity provide strong incentives for centralizing supply. And advances in computing and networking have allowed information technology to operate in an increasingly “virtual” fashion, with ever greater distances between the site of the underlying technological assets and the point at which people access, interpret and manipulate the information. Given this trend, radical changes in corporate IT appear all but inevitable.

Sometimes, the biggest business transformations seem inconceivable even as they’re occurring. Today when people look back at the supply of power in business, they see an evolution that unfolded with a clear and inevitable logic. It’s easy to discern that the practice of individual companies building and maintaining proprietary power plants was a transitory phenomenon, an artifact of necessity that never made much sense economically. From that viewpoint, electricity had to become a utility. But what seems obvious now must have seemed far-fetched, even ludicrous, to the factory owners and managers that had for decades maintained their own sources of power.

Now imagine what future generations will see when they look back at the current time a hundred years hence. Won’t the private data center seem just as transitory a phenomenon — just as much a stop-gap measure — as the private dynamo? Won’t the rise of IT utilities seem both natural and necessary? And won’t the way corporate computing is practiced today appear fundamentally illogical — and inherently doomed?

_________________________

[1] There are notable exceptions. See, for example, M.A. Rappa, “The Utility Business Model and the Future of Computing Services,” IBM Systems Journal 43, no. 1 (2004): 32-42; and P.A. Strassmann, “Transforming IT,” Computerworld, Nov. 5, 2001.

[2] The term was introduced in a 1992 paper by T.F. Bresnahan and M. Trajtenberg, later published as “General Purpose Technologies: ‘Engines of Growth’?” Journal of Econometrics 65, no. 1 (1995): 83-108. See also E. Helpman, ed., “General Purpose Technologies and Economic Growth” (Cambridge, Mass.: MIT Press, 1998).

[3] A. Friedlander, “Power and Light: Electricity in the U.S. Energy Infrastructure, 1870-1940” (Reston, Virginia: Corporation for National Research Initiatives, 1996), 51.

[4] D.E. Nye, “Electrifying America: Social Meanings of a New Technology” (Cambridge, Mass.: MIT Press, 1990), 236.

[5] T.P. Hughes, “Networks of Power: Electrification in Western Society, 1880-1930” (Baltimore: Johns Hopkins University Press, 1983), 106-139; and R.B. DuBoff, “Electric Power in American Manufacturing, 1889-1958” (New York: Arno Press, 1979), 42-45.

[6] For more on Insull’s career and accomplishments, see Hughes (1983), 201-226; and H. Evans, “They Made America” (New York: Little, Brown, 2004), 318-333.

[7] “The Systems and Operating Practice of the Commonwealth Edison Company of Chicago,” Electrical World and Engineer 51 (1908): 1023, as quoted in Hughes (1983), 223.

[8] DuBoff (1979), 40.

[9] For a discussion of the homogenization of information technology in business, see N.G. Carr, “Does IT Matter? Information Technology and the Corrosion of Competitive Advantage” (Boston: Harvard Business School Press, 2004).

[10] A. Andrzejak, M. Arlitt and J. Rolia, “Bounding the Resource Savings of Utility Computing Models,” Hewlett Packard Laboratories Working Paper HPL-2002-339, Nov. 27, 2002.

[11] V. Berstis, “Fundamentals of Grid Computing,” IBM Redbooks Paper, 2002.

[12] C. Hildebrand, “Why Squirrels Manage Storage Better than You Do,” Darwin, April 2003.

[13] B. Gomolski, “Gartner 2003 IT Spending and Staffing Survey Results” (Gartner Research, 2003).

[14] Effective and standardized metering systems will be as crucial to the formation of large-scale IT utilities as they were to electric utilities, and work in this area is progressing rapidly. See, for example, V. Albaugh and H. Madduri, “The Utility Metering Service of the Universal Management Infrastructure,” IBM Systems Journal 43, no. 1 (2004): 179-189.

[15] Google and Amazon.com already provide utility IT services. Companies draw on Google’s data centers and software to distribute advertisements over the Internet and to add search functions to their corporate Web sites. Amazon, in addition to running its own on-line store, rents its sophisticated retailing platform to other merchants such as Target, JC Penney and Borders.

[16] C.M. Christensen, “The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail” (Boston: Harvard Business School Press, 1997).

[17] It’s telling that today’s vendors of utility IT services, such as hosted applications and remote data centers, have been among the most aggressive adopters of open-source software and other commodity components.

[18] Nye (1990), 170-174.

Image: The Electricity Building, 1893 World’s Columbian Exposition, Chicago.

10 thoughts on “The end of corporate computing (10th anniversary edition)”

  1. JC

    Fantastic piece that made my weekend! So much truth and depth. Thank you. Basically this is what Automattic does with WordPress for personal web publishing: free low-barrier entry, service as a utility for a moderate price, or exorbitant prices for high-value enterprise customers. This thing you wrote about is right here, right now; it is not over the horizon. Heroku as a utility service for web applications, WordPress for personal publishing. Facebook even tries to capture all users’ attention in a desperate effort to sell everyone to everyone else, while Google is more or less succeeding in this way by monopolizing search as the vehicle for its ads business. When Google bought Writely, I felt it would sooner or later become the coup de grâce in the back of Microsoft Office. Ah, Salesforce! This is the end of corporate computing: it’s here, not nigh.

  2. michael webster

    Nicholas,

    I consistently return to reading your book “The Big Switch”.

    The one major area you did not explore enough was the rise of the public utility company, sparked literally by the threat of major fires in the metro areas.

    The public utility model allowed the centralization of hydro to work: public tax money in return for a guarantee on the capital invested.

    But Amazon’s current cloud model does not have this advantage. And so the analogy to the move from private to public power fails.

  3. Nick (post author)

    Every analogy breaks down at some point, and it’s often at that point that the analogy becomes most interesting, and also most illuminating.

  4. Luke Hughes

    I remember your early predictions clearly. Based on your thinking, and the thinking of Thomas Hughes, whom you cite (my dad), I used to give workshops to clients where I essentially made the high-level analogy of the coming cloud being akin to the utilities. How interesting to see it come true, as well as to read your latest thoughts.
    Luke

  5. Luke Hughes

    Analogies. Completely agree that where they break down is very interesting. The French generals made an analogy to WWI in WWII, which was not to their benefit. Having set up an analogy, the most interesting question often is “Where does it not hold?” For example, “armor” might have been the answer the French could have come up with... and the Germans did.

  6. rohit

    Hi Nicholas,
    I have read all your books: The Big Switch, The Shallows and others.

    When I tell my colleagues that there is a guy who predicted the big cloud wave coming and the dismantling of IT practices, no one believes that you predicted these inevitable changes 10 years ago.

    My question to you is this: because of work, I need to sit with my laptop, hooked to the internet, for all of the 8-9 hours of the workday. In those hours, even offline control over not working with the internet and the screen helps, but the mandates of office work practices override the benefit.

    What do you suggest in this scenario? I want my mind to come completely out of the control of the web. I am able to concentrate and read books on my Kindle if offline, but the moment I come in touch with the internet, the automated mind takes control and wants to surf from link to link.

    I want to feel that offline, unhooked, unreachable feeling.

    Thanks

  7. anthony wilson

    The 2008 crash accelerated this shift. Co-lo data centers were an exploding business during the downturn and one of the only areas where commercial real estate was booming.

    As a user of cloud services for teaching, I found the switch to omnipresent computing strange at first: I didn’t even have to take my laptop around anymore. Just a smartphone and access to any networked PC allowed me to work. And I could leverage materials and distribute and share them with students in a more proactive way.

    Another major issue in IT centralization is similar to one in power: regulation. Centralized power is overseen through public utility commissions. Beyond pricing, in the emerging IT model the issues of privacy, copyright and ownership are all currently in need of better legal and commercial frameworks.

    And interestingly, at the same time that IT centralization is occurring, power decentralization is now happening. Solar and blended green tech are fragmenting the power-station utility model. And among the leaders in self-generation of power are data centers. So what would a future decentralization of IT look like?

  8. Lawrence Orsini

    Hello Nicholas,

    I commend you for crafting this well-researched and compelling article. I would like to point out something else that has happened over the last 10 years that might give a different perspective. While the utility analogy follows pretty nicely what has happened with the consolidation of the cloud computing industry, the story is far from over. The utility industry is in the early stages of a significant upheaval and an eventual overthrow of its current business models.

    You referenced the consolidation of distributed micro-generation: the companies that built their own dynamos to serve their loads were eventually economically compelled to join Com Ed’s central-station power plant scheme. But that’s not the end of the story. Over the last 10 years, innovations in generation technology have continued marching on.

    In this last year, photovoltaic technology has reached price parity with central-station (coal-fired) power generation in many US states. PV will achieve parity across the US in 2016, and the industry will move into a highly disruptive stage of its evolution. I would like to point out, preemptively, that several classes of distributed generation technologies are reaching this point as well: most notably wind, fuel cells and traditional combined heat and power (CHP), all nearing if not at parity.

    You can draw the same evolutionary parallels to the telecom industry as it consolidated, was regulated, was deregulated and is now once again in an early, if not hyper-competitive, stage of consolidation.

    My prediction of the future for the cloud computing and services industry is that technology evolution and alternative business models will destabilize the industry much faster than they did utilities and telecom. Price points for the service are already driven largely by energy and data-transmission costs, and the inefficiencies inherent in transmitting anything, data included, over long distances will eventually give rise to a robust, distributed, peer-to-peer network of computing that will deliver speed and efficiencies that centralized computing can’t physically match.

    The distributed economy is coming for cloud computing, just like it is arriving for electricity generation.

  9. Nick (post author)

    The distributed economy is coming for cloud computing, just like it is arriving for electricity generation.

    I’m not so sure, though I don’t discount the possibility. Electric current is essentially the same whether it comes to you from a central coal-fired generating station or an array of solar panels on your roof. Information is different, because being able to exchange it easily (whether in a social network or a supply chain) creates additional value that goes beyond efficiencies in generation and distribution. Much of the attractiveness of big cloud services like Facebook or Salesforce derives from the ease of sharing data through the networks, rather than just the efficiency of central data processing. And, of course, the centralization of data collection and storage also creates big advantages for the “utilities” that go beyond generation and distribution efficiencies — e.g., understanding customer behavior, tailoring ads and other services, and gaining customer lock-in. In other words, the apparent new trend toward decentralized power supply may not hold much analogical value in understanding the future of computing. That said, you may be right that “a robust, distributed, peer to peer network of computing” will arise that proves even more efficient than the central data-center model while also preserving the benefits of easy sharing of data. But getting from where we are, considering both entrenched business interests and consumer inertia, to that new model will require enormous changes, and I’m finding it hard to foresee the impetus for those changes.

    Nick

  10. Lawrence Orsini

    Nick-

    It’s more about the economics than the actual product. Only 30-35% of the energy consumed at a central-station power plant actually generates electricity; the rest is lost as waste heat. Between 7% and 10% of that electricity is then lost during transmission from the central station to the load. These are the base economics driving utility decentralization and the surge in distributed generation.

    Up to 60% of the energy used in cloud computing is consumed refrigerating waste heat out of datacenters and into the atmosphere, not crunching data. An estimated 14% of the energy used in cloud computing is consumed transferring data between a datacenter and a customer, with the largest variable in that equation being distance. Add those up and there is a staggering amount of energy (3-4% of total US electricity) consumed in the central-station datacenter model; the 21% cloud profit margin you mentioned will make this market ripe for decentralization.

    Distributing computation to buildings with heat loads, as well as to local or very nearby compute loads, will eventually win the economic day, as it is doing with electricity. Energy is the key variable in cloud computing economics, or very soon will be, so recovering heat and reducing transmission costs will eventually challenge the cloud computing business model. Technology costs are following Moore’s law, and manufacturing efficiencies are commoditizing compute components. There is a rapidly growing market segment looking for a Henry Ford to deliver a ‘Model T’ compute appliance that takes the power, utility and revenues of cloud computing and puts them in the everyman’s hands.

    Uber and Lyft have already shown us what the everyman will do if he is presented with the potential to make revenue from a commodity. Airbnb has done the same with our homes. You can bet the cloud computing industry will be driven by the same economics and will eventually be forced to put its compute on a distributed network that provides heat for an existing load and the utility of cloud services, right where the users live.

    Just ask ConEd’s 40-person ‘utility of the future’ think tank; they’ll tell you exactly how this is going to go down.

    Lawrence
