Is the server industry doomed?

Companies continue to buy a lot of server computers to run their applications and web sites and perform other routine computing chores. More than 7 million servers were purchased last year, according to estimates released last week by analysts, generating about $50 billion in revenues for server manufacturers like HP, IBM, Dell, Sun, and Fujitsu. The server market is a big one, and it looks fairly healthy at the moment. But that may be an illusion. There are growing indications that the server business is a dead man walking.

The most immediate threat is the twin trends of consolidation and virtualization. To save money, companies are merging their data centers and standardizing the applications they run throughout their businesses. The chemicals giant Bayer, for instance, has been consolidating its IT assets worldwide. In the U.S. alone, it slashed its number of data centers from 42 to 2 in 2002, in the process cutting its server count by more than half, from 1,335 to 615. With more companies embracing server virtualization – the use of software to turn one physical server into many virtual servers – the opportunities for further consolidation will only expand. Last year, for example, Sumitomo Mitsui Bank used virtualization to replace 149 traditional servers with 14 blade servers running VMware virtualization software. Timothy Morgan, editor of IT Jungle, believes that, as companies accelerate the consolidation and virtualization of their computing infrastructures, “the installed base of 20 million to 25 million servers in the world could condense radically, perhaps to as low as 10 million to 15 million machines.”
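
The arithmetic behind such consolidation is simple enough. Here is a rough back-of-the-envelope sketch in Python; the utilization figures in it are illustrative assumptions of mine, not numbers reported by Bayer or Sumitomo Mitsui:

```python
import math

# Back-of-the-envelope consolidation estimate. The utilization figures used
# here are illustrative assumptions, not data reported by the companies cited.

def hosts_needed(physical_servers, avg_utilization, target_utilization):
    """Estimate how many virtualized hosts can absorb a fleet of
    lightly loaded physical servers."""
    total_load = physical_servers * avg_utilization    # the work actually being done
    return math.ceil(total_load / target_utilization)  # hosts, once packed to the target

# 149 physical servers averaging (say) 7% utilization, repacked onto hosts run
# at 70% utilization, fit on about 15 machines: the same order of magnitude as
# Sumitomo Mitsui's reduction from 149 servers to 14 blades.
print(hosts_needed(149, 0.07, 0.70))  # 15
```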

But even that may, in the long run, be too rosy a scenario.

What if the future brings not simply a rationalized version of traditional computing, with fewer servers used more efficiently, but a fundamentally different version of computing, with little need for brand-name servers at all? In this scenario, the core unit of business computing would not be the small, inflexible server but rather the large, flexible computing cluster or grid. These clusters in turn would be built not from traditional branded servers but from cheap, commodity subcomponents – chips, boards, drives, power supplies, and so on – that the grid operators would assemble into tightly networked physical or virtual machines. Many of the functions and features built into today’s branded servers would be taken over by the software running the cluster.

If you want to see a harbinger of this model of computing, just look at Google’s infrastructure. Google doesn’t buy any servers to run its search engine. It buys cheap, commodity components and assembles them itself into vast clusters of computers that it describes as “resembl[ing] mid-range desktop PCs.” The computers run in parallel, using a customized version of the open-source Linux operating system. Google doesn’t have to worry about “server reliability” – one of the main selling points used by server manufacturers – because reliability is ensured by its software, not its hardware. If, say, a processor fails, others pick up the slack until the faulty part is swapped out. What concerns Google is the big cluster and the little subcomponent; it’s moved well beyond the idea of the branded server being the heart of business computing.
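
To make the point concrete, here is a minimal sketch of what “reliability in software” means in practice. The node names and the simulated failure are hypothetical, and this is only the general idea, not Google’s actual code: the same data lives on several cheap machines, and the software simply routes around whichever one happens to be dead.

```python
import random

# A minimal sketch of reliability handled in software rather than hardware.
# The node names and the failure simulation are hypothetical illustrations.

REPLICAS = ["node-a", "node-b", "node-c"]  # cheap machines holding copies of the same data

def query_node(node, query):
    """Pretend to query a single machine; raise if it happens to be down."""
    if random.random() < 0.2:  # simulate a failed or flaky box
        raise ConnectionError(f"{node} is unreachable")
    return f"results for '{query}' from {node}"

def reliable_query(query):
    """Try the replicas in random order; the cluster fails only if every copy does."""
    for node in random.sample(REPLICAS, len(REPLICAS)):
        try:
            return query_node(node, query)
        except ConnectionError:
            continue  # route around the dead machine; swap out the faulty part later
    raise RuntimeError("all replicas are down")

print(reliable_query("utility computing"))
```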

Obviously, Google is a unique company with idiosyncratic computing requirements. But its efficient, flexible, networked model of computing looks more and more like the model of the future. As Google engineers Luiz Andre Barroso, Jeffrey Dean, and Urs Hoelzle write in their IEEE Micro article “Web Search for a Planet,” “many applications share the essential traits that allow for a PC-based cluster architecture.” IT expert Paul Strassmann goes further in arguing that Google’s infrastructure serves as a template for the future. “Network-centric systems,” he says, “cannot be built on [traditional] workgroup-centric architecture.” If large, expert-run utility grids were to supplant subscale corporate data centers as the engines of computing, the need to buy branded servers would evaporate. The expert engineers who build and operate the grids would, like Google’s engineers, simply buy cheap subcomponents and use sophisticated software to tie them all together into large-scale computing power plants. Or, if they wanted to continue to purchase self-contained computer boxes, they’d simply contract with Taiwanese or Chinese suppliers to assemble them to their specifications, cutting out the middlemen and their markups.

Ultimately, we may come to find that the branded server was simply a transitional technology, a stop-gap machine required as the network, or utility, model of computing matured. I recently spoke to the chief executive of a big utility hosting company who expressed amazement that its largest server supplier seemed to be “in denial” about the profound shifts under way in business computing. Maybe it is denial. Or maybe it’s just fear.

UPDATE: See further discussion here.

9 thoughts on “Is the server industry doomed?”

  1. Andrew Schmitt

    Sun Micro has made a big bet on building servers optimized for high-density environments. If small enterprises outsource their hardware, more and more of the server hardware market will look like the big iron installations of Google.

    I think Sun has the right idea, but I question the logic of building the strategy around a new CPU (Niagara).

    I agree server hardware will become increasingly commoditized, and the Taiwan/Chinese white box model will prevail.

    It is also increasingly likely that the infrastructure will be located and maintained offshore using lower cost labor.

    http://www.nyquistcapital.com/2005/12/10/sun-wants-to-change-the-planet/

  2. Srinivasan

    A completely illogical argument. You have made a case for server costs coming down – “doomed” is a big word.

    1. Hardware will become cheaper

    2. Software will complement hardware in building resilient infrastructure.

    The server is a transitional technology? In your next post, can you please post five reasons why a component-based architecture is better than a server-based one, setting the question of cost aside?

    Remember: 1 + 1 = 10 only if the base is 2

  3. vinnie mirchandani

    I do not know if too many organizations can do what Google has done, but it sure allows for a “hardware as a service” model…your vision of utility computing. But just as software incumbents pay lip service to software as a service, and it has taken new entrants like salesforce.com to drive it, it may take some new players to offer “hardware as a service.” A new product for Google?

  4. Filip Verhaeghe

    Nick,

    As you mention in this post, the number of servers sold is still enormous. In your slides, you briefly touched upon the fact that most companies still feel the need to own the infrastructure.

    In your book “Does IT Matter?”, you also said:

    “The likelihood of an early investment in a new information technology truly paying off—something of a long shot to begin with, given the risk involved—gets ever slimmer as time goes by. Today, most IT-based competitive advantages simply vanish too quickly to be meaningful.”

    As a result, few CEOs, CIOs, or IT managers are looking to adopt the utility computing model; they want to avoid the risks of early adoption.

    How can young companies (without the deep pockets and matching marketing budgets) overcome this problem?

    Thanks,

    Filip.

  5. shiv

    The metrics that organizations should consider are

    1. Costs per processing unit,

    2. Volatility of computing demand.

    In the long term, enterprises will run a limited number of highly consolidated servers on virtualization software, yielding roughly 60–70% utilization. Additional computing power to run complex statistical and marketing programs will be available on demand, provided by none other than OEM vendors like Sun and IBM, or by other providers using generic hardware.

  6. Liam @ Web 2.5 Blog

    An electric utility provides Watt-hours, a commodity. I apply that to my needs on my premises.

    A computing utility provides logic, a commodity (whether sw or hw). I apply that to my data (a priceless artifact), but on whose premises?

    Must I hand all my data off to the utility, so that I am essentially renting my own data as well as the logic?

    If logic is truly a utility product, why can I not apply it to my needs on my own premises?

  7. West Coast Grid

    Because 1.21 gigaflops just aren’t 1.21 gigawatts

    In his normally insightful blog this week, Nicholas Carr made a rather off-the-wall suggestion. He posits that the server industry is doomed, and as proof he writes about trends he sees happening:

  8. Robert E Spivack

    We are big believers in virtualization and utility-computing services – we actively provide Virtual Machine / Virtual Server hosting as a key part of our hosted services (www.voicegateway.com).

    However, based on customer feedback and our marketing efforts, our pragmatic view is that the traditional server/datacenter approach will continue to exist and grow in parallel.

    This argument is more akin to the “thin client” versus “thick client” battles of the ’90s. Time has shown that the pendulum continues to swing back and forth.

    Ever since the timesharing systems of the early ’60s, we have seen this constant battle between distributed, individual resources (terminals, desktops, or servers) and massive centralized systems (timesharing, minicomputers, mainframes, network servers).

    History has shown that the true market situation is a repeating sine wave, alternating between the two approaches.

    New technologies of their day – mini-computers, network computing, thin-client enablers (Citrix, Windows Terminal Server, WinCE, “PC terminals”), and centralized computing (virtualization, grid computing, blade servers) – act as the catalysts that create the inflection points, but the overall curve is a never-ending sine wave.

  9. Expert Texture

    Moratorium on a metaphor?

    Yesterday Dan and I discussed a pet peeve of mine: compute cycles being likened to electricity. This comes up nearly every time someone talks or blogs about utility computing.  The catalyst this time was Nicholas Carr and his piece, Is the Server Indu…
