The techno-utility complex

Phil Wainewright, contemplating the potential concentration of the computing grid into “a handful of huge data centers,” warns against the rise of a “techno-utility complex” in which “vested economic and political interests conspire together to build huge technology-based utility industries that preserve and reinforce their power bases.” He elaborates:

concentration of compute power in the network is in the interests of those who operate the large data centers. Sun, Google, Microsoft and Amazon will lobby hard for it, but who will argue the counter case, against the techno-utility complex? I’m not saying there’s no case for megacenters — they’ll benefit consumers where they provide true economies of scale, especially if they’re sited next to plentiful supplies of hydro or other carbon-neutral energy sources — but my gut feel is that the ideal solution is a hybrid infrastructure, combining megacenters, transportable midi-centers and client-side resources, each performing tasks they’re best suited for and most efficient at.

I don’t think we’re going to see anything like a total centralization of compute power in a small number of data centers. Distributed microprocessors and data stores aren’t going away, and in fact are essential to the effective operation of web apps and other centrally supplied services. On the other hand, I do think we’ll see a steadily increasing concentration of computing assets, compute power, and data storage – not just because it’s in the best economic interest of the utility suppliers but because it’s in the best economic interest of the users as well. Not only does centralization provide economies of scale, in the machinery, the labor force, the real estate, and the electricity and other resources used for computing services, but it provides for a more flexible means of sharing information among both individuals and companies.

But Wainewright is right to point to the risks inherent in the centralization of control over the computing grid. He concludes, “The danger is that allowing the techno-utility complex to get its way will prove contrary to the true interests of society.” The scope of that danger remains uncertain, but I’m convinced that in the coming years the public and its politicians will face some difficult choices as both the risks and the benefits of the centralization of computing power become more apparent.

4 thoughts on “The techno-utility complex”

  1. James Urquhart

    I think we are already seeing one of the early signs of this vested interest: cloud lock-in. Amazon EC2’s requirement that customers use S3, Google’s alleged nefarious use of MySQL, and Microsoft’s certain requirement that its capacity offering run on Longhorn/Viridian all point to the various ways vendors will offer us really cheap computing at a price: extremely high costs to move off their platform to another.

    This is exactly why I am preaching the dangers of cloud lock-in and advising many to convert their own capacity into a utility first while they wait for the legislative and standardization issues to work themselves out.

    The battle has already begun, but the early ecstatic EC2 crowd just hasn’t run into the need or desire to change vendors yet.

  2. Botchagalupe

    IMHO, utility computing is inevitable and can’t be stopped. The real issue is going to be ownership of that last mile. That’s when the legislative and standardization issues are going to get interesting. Even though Amazon has a jump on everyone, Google is doing what it does best (leading) by bidding for wireless spectrum.

  3. Aziz Poonawalla

    I agree that lock-in is an issue, but (taking a cue from your new book) the analogy to the centralization of electrical generation at the turn of the century suggests that it’s a natural step in the cycle. Only with massive centralization of power generation (i.e., lock-in) was the massive industrial progress of the 20th century possible. Perhaps we need a kind of lock-in to attain the truly vast computing scales needed to solve information-era problems like artificial intelligence, prime-number prediction and large-number factorization, and of course super-modeling of physical phenomena like hurricanes and global climate. The fastest supercomputer clusters today are still more capable than the largest compute clouds, but that is guaranteed to change.

    Arguably, we are still in lock-in mode for electricity generation today, though with electricity deregulation that is changing. Extrapolate: I imagine that someday every neighborhood might have a local pebble-bed reactor underground next to the communal (bacterially engineered) septic tank, and people will pump electricity back into the global grid from their hyper-efficient solar panels to make some extra cash. We aren’t there yet, and when we do get there it will open new vistas for personal use of cheap power (keeping all the insane computer gear in our households running will be a start). But we would never have gotten to this point if we’d insisted on avoiding lock-in at the outset of the industrial revolution.

    Analogously, the compute grid needs to go through this phase first in order to unlock the potential of massive amounts of compute power. Otherwise we simply won’t have the ability even to envision what we might do.

    I’m not completely blasé about the risks of lock-in, but I don’t see how Amazon or anyone else is going to stop you from taking your data and applications elsewhere if you decide that someone else is giving you a better offer. It’s worth the risk.

  4. James Urquhart

    Sure, vendor lock-in is necessary (in some ways both technologically and to make the businesses work). However, that fact does not make the risk of running in a third-party cloud right for everyone. I would think that most people would be uncomfortable with their banking data residing on S3, not only because of security but because of the risk of losing control of the data altogether. However, most would be perfectly comfortable if banks implemented utility computing concepts within their own data centers, even if the platform they selected involved some lock-in.

    Similarly, the fantastic approaches of SmugMug and others carving out the “web without a datacenter” world should clearly be pursued. Over time, the lessons learned from these pioneers (again, both technical and economic) will result in stronger offerings from capacity vendors, eventually strong enough even for the financial industry.

    The big difference between power and compute capacity is that there is no value to you in the electricity I use: a volt is a volt is a volt. However, you may find plenty of value in the data I own or the algorithms I rely on. I have to be much more careful sharing a compute grid with others than I have to be sharing the power grid, or even the water supply.

    (Interestingly enough, one type of data is valuable in both cases: statistics about actual use of the utility. Law enforcement loves knowing about homes that consume much more power than their neighbors; they are easy pickings for the narcotics squad.)

    I agree that there will be similarities between the growth of the compute capacity utility and the power utility, but you can take the analogy too far. There are critical technical and economic differences that have to be taken into account.
