More thoughts on servers

A couple of days ago, I suggested that three trends – the consolidation of corporate IT assets, the spread of hardware virtualization technologies, and the rise of expert-run utility clusters or grids – might in the long run spell doom for the traditional server-computer industry. Consolidation and virtualization would let companies do a lot more with a lot fewer servers, while utilities would have the scale and expertise to take over the computer-assembly function themselves (as Google, for instance, already does). Two bloggers – SAP’s Charles Zedlewski and Sun’s John Clingan – have responded with strong counterarguments.

Both argue that if servers become more efficient (through virtualization, for instance), then companies will tend to buy more of them, not fewer. If a product becomes more valuable, after all, you’ll want more of it. That’s a great point (for unit sales, if not for revenues), though I’m not sure it applies in this case. It’s important to remember that what’s really being consumed is computing cycles, not servers; through consolidation and virtualization companies may both consume a lot more cycles and buy a lot fewer boxes. In fact, that seems to be the case for a lot of the companies that have been most aggressive in consolidating data centers up to now. (Zedlewski argues that we should have already seen a dropoff in server sales since virtualization is now “wildly popular.” I think he’s jumping the gun, though; it’s still early days for virtualization. Server sales were weak in the last quarter, but it’s too soon to know if that’s a trend or a blip.)
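To see how both things can be true at once – a lot more cycles consumed and a lot fewer boxes bought – here is a rough back-of-the-envelope sketch in Python. The demand and utilization figures are purely illustrative assumptions, not numbers from any of the posts.

    # Back-of-the-envelope sketch: demand for cycles can grow while the
    # number of physical boxes shrinks, if consolidation raises utilization.
    # All figures below are illustrative assumptions.

    def servers_needed(total_demand, capacity_per_server, utilization):
        """Servers required to meet a given demand at a given average utilization."""
        return total_demand / (capacity_per_server * utilization)

    # Before consolidation: many dedicated, lightly loaded machines.
    before = servers_needed(total_demand=1000, capacity_per_server=10, utilization=0.15)

    # After virtualization: demand for cycles doubles, but average utilization quadruples.
    after = servers_needed(total_demand=2000, capacity_per_server=10, utilization=0.60)

    print(f"servers before consolidation: {before:.0f}")  # ~667
    print(f"servers after consolidation:  {after:.0f}")   # ~333 -- more cycles, fewer boxes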

Zedlewski also argues against the idea of companies shifting away from status-quo, proprietary IT architectures toward more flexible, multi-tenant ones. He may turn out to be right here as well, but I think he underestimates the economies of scale that the utility model, as it matures, will be able to deliver – not only in hardware costs, but in labor costs, electricity costs, real estate costs, and software costs – as well as its power to free up capital and management time for more strategic purposes. But, anyway, Clingan and Zedlewski are right to point out that there are a lot of contending forces influencing the course of the enterprise IT world today – that’s why it will be so fascinating to watch how things shake out over the next decade or so.

My own sense is that it’s unlikely that the status quo will hold. Frank Sommers, in a comment to Clingan’s post, does a great job of explaining why it’s hard to see tomorrow when you’re looking through today’s eyes:

In his introduction to Patterson/Hennessy's book, Computer Architecture, Bill Joy recalls a statement made by Maurice Wilkes to the effect that many innovations occur by imagining that something that is not currently true is already true.

In the context of Carr's comments, what I'd imagine is that a very reliable, high-bandwidth network connection is available anywhere very cheaply. That condition is only partially true today, but suppose that one day it becomes generally true.

If that were the case, why would any company not in the IT services business want to run its own data center? I mean, few companies run their own power generators (some do, I admit), mainly because access to the power grid is readily available practically anywhere for a reasonable cost. Wouldn't it be so much more convenient to just be able to use a thin client to log into a remotely hosted desktop? And with standard application interfaces, such as J2EE, shouldn't a company's IT department be able to deploy an enterprise app into a remote data center's hosting environment? Many companies already do this, but I'm curious why most wouldn't follow that path (again, imagining that the above condition is already true).

If that were the case, these remote data center services could operate servers with much higher utilization, hence leading to even further consolidation of servers, and hence lower costs not only for servers, but especially for such hosting services. If hosting service costs fall faster than server costs do, then there might be an even bigger economic incentive for enterprises to outsource their data center operations, hence leading to even faster consolidation of applications to servers.

Interestingly, that's why, I think, Sun's strategy of throughput-oriented computing is going to pay off: data centers will want systems that they can consolidate their customers' applications into. So [while] it's true that there will be fewer servers sold, it will be interesting to see which vendors gain market share and remain standing.

It will, indeed.
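To make the utilization arithmetic in Sommers' comment concrete, here is a small statistical-multiplexing sketch. The customer counts, loads, and duty cycles below are assumptions chosen for illustration, not figures from his comment.

    import random

    # Why a shared utility can run at much higher utilization: each customer
    # must provision for its own peak, but peaks rarely coincide, so the
    # pooled peak is far smaller than the sum of the individual peaks.
    # All numbers here are illustrative assumptions.

    random.seed(42)
    CUSTOMERS = 50
    HOURS = 24 * 7  # one simulated week

    # Each customer idles at 10 units and spikes to 100 units about 5% of the time.
    loads = [[100 if random.random() < 0.05 else 10 for _ in range(HOURS)]
             for _ in range(CUSTOMERS)]

    # Dedicated model: every customer buys capacity for its own peak.
    dedicated_capacity = sum(max(series) for series in loads)

    # Utility model: one shared pool sized for the busiest combined hour.
    pooled_capacity = max(sum(series[h] for series in loads) for h in range(HOURS))

    print(f"dedicated capacity: {dedicated_capacity}")  # 50 x 100 = 5000
    print(f"pooled capacity:    {pooled_capacity}")     # a small fraction of that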

8 thoughts on “More thoughts on servers”

  1. Adrian Cockcroft

    The supply and demand mechanism that drives the volume of the computer/server market has historically increased both unit volume and dollar volume every time the cost has come down significantly.

    The reason is that a lower cost base unlocks latent demand. There are services that are not economically viable, so their market volume is near zero, then a large drop in costs causes explosive growth and forms a new market. Often the new market ends up bigger than the existing market.

    There are many examples: the market for portable personal storage was near zero until flash memory and disk drives got cheap enough to make the "iPod" market viable. There are several obvious latent markets for servers – e.g., the cost/performance of reliable storage web services is currently too high for everyone to have a full backup of their home computers, but that's going to change eventually and sell a lot of servers.

    I worked for Sun from 1988 to 2004, and I've seen many waves of latent markets emerge as prices dropped and the market expanded.

  2. Sebastian Muschter

    Two quick questions. Most of the energy used by servers is converted into heat; in a data center, even more of the total energy goes not into the servers but into cooling the place.

    1.) How much energy could be saved by

    > directly cooling the CPUs with centralized cooling?

    > transforming electrical power centrally and stepping it down directly to the level the servers need, i.e. no more decentralized "power supplies"?

    Architecturally, that would mean ripping the server into pieces, merging power supplies, bundling CPUs in a central "cool room," etc. – but why not? (In a grid, the bottleneck of LAN transmission speeds can be managed – so why not 'LAN' a CPU with its memory?)

    2.) Why not locate data centers in Alaska and use some fiber lines into California? Saves on cooling, rents and labor costs…

  3. John Clingan

    This is a healthy discussion, so kudos for initiating it. I think the formula Adrian mentions still holds. As (if) the industry moves to compute grids, the overall cost of ownership will be lower, driving yet more demand.

    Sun is trying to nudge the market in this direction through the Sun Grid. Someone at Sun has done the math, and it obviously has a sustainable business model behind it. What I don’t know is how much of the business model’s revenue is generated through servers versus software & services. My gut feel is the revenue curve goes up on all of them.

    FYI, in the Sun Grid, the primary branding (IMHO) is the architecture. There is a tremendous amount of engineering behind the Sun Grid, as there is behind Google's grid. The primary value is the grid as a whole, not the individual subcomponents. However, there is value in branded subcomponents. If you consider the T2000 as a subcomponent (subcomponent is relative), it has the ability to make the branded grid more flexible and cost effective. There is room left for innovation at the server level of granularity.

    Now I’m just rambling ….

  4. Nick

    Adrian and John,

    Thanks. I agree that reductions in the cost of computing will continue to drive up demand for computing. That’s inevitable. The question is, will that translate into higher sales of traditional branded servers? Or is a new model for large-scale, shared computing emerging that isn’t built on traditional branded servers? My own sense is that while the traditional server suited the massively fragmented, massively redundant computing infrastructure that has prevailed for the last few decades, it may not suit the consolidated, networked infrastructure of the future (whatever that infrastructure ends up looking like). Frank Sommers delves into this in his comment. Sebastian goes even further, asking whether the very form of the server will be rendered obsolete in the future. Once you start thinking outside “the box,” will you even need the box anymore?

    Nick

  5. Daniel Ciruli

    Adrian has a great point, and it will be exciting to watch as this happens. The availability of cheap computing power will be more groundbreaking than a lot of the technologies that are being billed as “Web 2.0.”

    Data centers will continue to evolve, and people will continue to explore options like SOA and SaaS. However, as I posted here earlier, the analogy of electric utility and compute utility just doesn't work. Electrons are all alike, and computes aren't. Data is hard to move (and, as fast as networks are getting, data is getting larger faster).

    The server market will certainly keep evolving, and the models that make up the market now will be gone in five years. But will the server market be "doomed"? Nope. Just destined to change.

  6. Adrian Cockcroft

    There are many complex and interrelated issues here, but the underlying fundamentals are based on Total Cost of Ownership (TCO), and reliability coupled with the cost of failures. If you can build stateless services where users don’t mind losing a transaction now and again, then you can deploy relatively unreliable systems. You want some diversity in those systems, so that they don’t all fail the same way at the same time with some systemic problem, but managing diversity adds cost, so a more reliable basic system reduces the management cost part of the TCO.

    Andy Bechtolsheim's design goals for Sun's Opteron machines are to make them more reliable, more power efficient, more maintainable, and faster than their competitors in this space. For example, they provide dual power supplies, which helps availability in a typical dual-power-grid datacenter, and they use very efficient power supplies, so overall power consumption per server is lower. Andy argues that the total efficiency of his systems measured at the datacenter level is better than the efficiency of the DC power systems being pushed by vendors such as Rackable.

    So will we still have "the box"? The volume server market is still going to be individual boxes; it's a convenient chunk for purchase and for failure containment. There is a growing market for blades, which is just a larger box wrapper to put in your own racks, and vendors like Rackable supply a complete rack as the box and are used by some of the large-scale deployments (they list Yahoo as a customer). Some big end users build custom high-density data centers, effectively an even bigger box, and others buy up cheap low-density data centers and leave empty space in the racks.

    Sun's challenge is to ramp up their Opteron sales fast enough to take over as their main revenue driver. They also need to keep their high-end server business going, in the same way as IBM has kept its mainframe business going. For now, in my opinion, the best of breed seems to be Sun at the system level, IBM at the blade level, and Rackable for high-density racks.

    I don’t think enough end users really want to build their own machines to make much difference. The existing vendors are setting up their own web service utilities and competing very aggressively to win the big deals at other web service utilities. If you want to build a huge utility, you specify exactly what you want and someone makes custom machines for you. Some of those deals will be done directly with the Asian “White Box” manufacturers but not enough to be a dominant trend.

  7. West Coast Grid

    How’s the generator business?

    Nicholas Carr is vigorously defending his stance (Is the server market doomed?) in his latest post (More thoughts on servers).

    I think he’s still missing the boat.

  8. Charles Zedlewski

    Thanks for taking the time to sum up some of the recent posts, Nick. I don't want to beat this topic to death, but I do want to correct your characterization of my point of view.

    In my post, I didn't defend the "status quo," defend "proprietary IT," or deride "multi-tenancy." I just questioned the economic benefits of leasing computation from a central service versus using purchased servers. I think a shared utility might be 20-30% more efficient, but that would be offset by the contracting costs between the utility and the customer.

    Either way it will be a few years before we know the real story.
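As an illustrative aside, Zedlewski's trade-off can be laid out as simple break-even arithmetic: the utility wins only when its efficiency savings exceed the contracting overhead. Apart from the 20-30% range he cites, the figures below are assumptions, not his numbers.

    # Break-even sketch for Zedlewski's argument. The in-house cost and the
    # contracting-cost figures are assumed for illustration; only the 20-30%
    # efficiency range comes from his comment.

    in_house_cost = 1_000_000  # assumed annual cost of running servers in-house

    for efficiency_gain in (0.20, 0.30):  # the range Zedlewski cites
        for contracting_cost in (100_000, 250_000, 350_000):  # assumed overhead
            utility_cost = in_house_cost * (1 - efficiency_gain) + contracting_cost
            verdict = "utility cheaper" if utility_cost < in_house_cost else "in-house cheaper"
            print(f"gain {efficiency_gain:.0%}, contracting ${contracting_cost:,}: "
                  f"utility ${utility_cost:,.0f} -> {verdict}")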
