Yesterday’s feeding frenzy for VMware’s freshly spawned stock may or may not prove to be rational, but, together with today’s news that Citrix is paying a hefty price to buy XenSource, what it clearly shows is that investors have taken notice of a sea change in the IT business: a great deal of money that would traditionally have gone into hardware is now going into software instead. VMware’s and Xen’s virtualization software allows companies to turn a single computer into two or more “virtual” computers, each of which can be doing a different job. Essentially, it allows computer owners to tap into the spare processing capacity of their machines – spare capacity that has traditionally been wasted and that appears to have increased steadily as Moore’s Law has pushed up the power of microprocessors.
Some argue that virtualization will ultimately spur greater purchases of servers and other hardware. In a recent blog post, for example, Sun Microsystems CEO Jonathan Schwartz wrote, “I’d like to go on record saying virtualization is good for the technology industry – which seems to be counterintuitive. The general fear is that technologies like Solaris 10 or VMware that help people squeeze more work from the systems they already own is somehow bad for Sun. In my view, quite the opposite is true. As I said, when we double the speed of our computers, people don’t buy half as many – they tend to buy twice as many.”
Schwartz’s analysis has a basic flaw – he conflates two things, processor speed and processor capacity utilization, that, while related, are different – but that doesn’t necessarily mean his conclusion is wrong. In fact, if you look only at the near term, he may well be right. Virtualization will likely spur purchases of servers – for the simple reason that when companies consolidate their machines they often upgrade their hardware at the same time (to get the full benefits of virtualization).
But what about the longer term? Here, it seems to me, the picture is very different. It’s becoming clear that, for large companies in particular, the consolidation of hardware can take place on a truly vast scale. Siemens Medical Solutions, for instance, used virtualization to increase the capacity utilization of its servers from 4% (yes, you read that right) to about 65%, enabling it to reduce the number of servers it runs by about 90%. Hewlett-Packard, itself a major server supplier, is engaged in a massive consolidation effort of its own, which it expects to reduce the number of servers it runs by nearly a third even as it increases its available computing power by 80%. These are not unusual results, and consolidation ratios will likely continue to expand as virtualization, server, and processor technologies advance. When you consider that we are only at the beginning of a major wave of consolidation that will enable big companies to operate with far fewer computers, it becomes difficult to imagine that we’ll end up with more servers in corporate data centers.
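(A quick back-of-the-envelope check of the Siemens figures – a minimal sketch that assumes the total workload stays fixed and that utilization translates linearly into useful work; only the two percentages come from the report above.)

```python
# Back-of-the-envelope consolidation math. Assumes a fixed total workload
# and linear utilization scaling; both are simplifications.
old_utilization = 0.04   # reported pre-consolidation server utilization
new_utilization = 0.65   # reported post-consolidation utilization

fraction_remaining = old_utilization / new_utilization
print(f"servers still needed: {fraction_remaining:.1%}")     # ~6.2%
print(f"servers eliminated:   {1 - fraction_remaining:.1%}") # ~93.8%
```

That simple arithmetic yields a reduction of roughly 94%, in line with the 90% or so that Siemens reports.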
That doesn’t mean that there aren’t new applications of computing that will require substantial increases in processing power. I tend to agree with another Sun executive, CTO Greg Papadopoulos, that we’ll see an increasing demand for high-performance computing that will involve considerable investment in new hardware. But Papadopoulos also points out that the processing demands of mainstream business applications – the meat and potatoes of the enterprise IT market – are not keeping up with the expansion of computing supply delivered by Moore’s Law. In other words, the hardware requirements of most business applications are going down, not up – and consolidation opportunities will only grow.
Moreover, much of the investment in high-performance computing is going into building central, utility data centers – for software-as-a-service applications, computing grids, and storage depots – that themselves displace the need for businesses to buy additional hardware. When a company signs up for, say, Salesforce.com’s CRM service instead of a CRM application from Oracle or Microsoft, it avoids having to buy its own servers. Similarly, when a small business or a school decides to use Google’s Gmail platform to run its email, it can avoid having to buy and run its own email server.
In other words, the consolidation of servers and other gear is not only occurring at the level of the individual company, through the adoption of virtualization and other technologies for improving capacity utilization. Utility computing offers the opportunity for consolidation to occur as well across entire industries and, indeed, the entire economy. While the total demand for computer processing cycles seems certain to continue its inexorable rise, that no longer means that the overall demand for computers needs to rise with it. The hardware business is changing, and investors are wise to take notice.
I’ve been using a lot of VMs lately. It’s *very* clever s/ware. But it’s a slightly awkward way to scope environments to run stuff in. A VM is just a big black box of bits – you’re still left with x number of OS instances to re-configure, even if they’re all sharing a physical box.
When you run a VM you get the ability to snapshot it, roll it back, or copy it. That helps with recovering from mistakes, but not much with avoiding them. In many ways, what the VMS OS offered in the 80s was better.
The reason Siemens would have bought so many servers in the first place was to stop applications running on the same server from interfering with each other. The servers were bought precisely because the OS couldn’t provide that isolation.
(The interference problem actually gets worse for big-iron apps on VMs. You get multiple guest OSs each trying to manage resources as if it had exclusive access to them.)
FreeBSD jails, or Solaris Containers, are a cheaper and more maintainable way to let contending apps share servers safely. You can run 1 OS instance, with multiple compartments. FreeBSD’s jails are crude, but still good enough to push the cost of a “virtual server” below £30 a month.
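To make the compartment idea concrete, here’s a minimal Python sketch of the chroot primitive that FreeBSD jails extend (the path is hypothetical, it must run as root, and a real jail adds process, user, and network isolation on top of this):

```python
import os

# Minimal sketch of filesystem scoping, the primitive that jails build on.
# /compartments/app1 is a hypothetical directory pre-populated with a
# userland (including /bin/sh). Requires root privileges on a Unix system.
jail_root = "/compartments/app1"

os.chroot(jail_root)  # the process's filesystem root is now the compartment
os.chdir("/")         # make sure the working directory is inside the new root

# From here on, this process and its children see only the files under
# /compartments/app1: one OS instance, many scoped environments.
os.execv("/bin/sh", ["/bin/sh"])  # hand off to a shell inside the compartment
```

A real jail does far more – per-jail IP addresses, process-table isolation, its own root user – which is exactly what makes it safe enough to sell as a cheap “virtual server”.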
VMs are a transitional thing. They’ll start fading away as OSs get better at scoping and managing resources. Although that’ll probably take a decade or so :-)
I agree wholeheartedly that virtualization is going to permanently change the application-to-server ratio in most companies. The days when you buy an email application and a dedicated piece of hardware to run it are fading fast. Furthermore, utility computing (especially SaaS) will probably replace much of the need to even deploy virtual machines within an enterprise.
(I have noted that virtualization != utility computing, at least as far as service level automation is concerned.)
However, I would offer that there are certain organizations that may never be willing to have their applications share infrastructure with other enterprises, even if they would benefit from the use of utility computing practices. For example, even if the security hurdles can be overcome, is there any way that *politically* the defense and intelligence communities would ever be allowed to host their data and applications in a third-party “utility”? Will banks be willing to put customer account applications into EC2/S3, on infrastructure shared with everyone from other banks to the hacker community? I don’t think so.
Utility computing will not be a “one size fits all” world, as you well understand. There will be a combination of SaaS vendors, HaaS vendors, boutique capacity and/or service providers and, yes, private data centers (with many utility computing capabilities) that will complete the vision. All, however, will allow customers to optimize the cost of computing within each of those domains.
None of that derails your thesis, however; the market for individual servers will most certainly be “consolidated”, thanks largely to the successes of VMware and XenSource/Citrix.
Moving apps off to data centres and utility computing also sets off a huge demand for hardware. These data centres make huge purchases of new hardware, which in turn offsets or even outmatches the slack due to consolidation of on-premise hardware.
The link for “virtualization != utility computing” in my earlier comment is broken. The correct target is:
http://servicelevelautomation.blogspot.com/2006/10/service-virtualization-defined.html
My apologies for not catching the error before posting.
Investors should note stuff, I guess, but:
This projection of a contraction is just a conservative guess, not much more. The truth is that the hardware market is extremely volatile over relatively short periods: demand soars when (as regularly happens) the next big application is found, then crashes in between those application-innovations, as the tortoise side of IT engineering catches up and starts systematically implementing order-of-magnitude gains in efficiency.
What is contracting, certainly, is the part of the hardware market for which virtualization is a substitute for a more expensive system of under-utilized, private servers.
What we don’t know yet, though, is whether new applications will be discovered that hit a sweet spot of utility v. cost that, once again, pumps up the hardware market. Actually, we do know….
Intel is one example of a hardware company that’s pouring money into software research aimed at exploiting cheap server cycles. They aren’t just blowing smoke. Computer scientists can see essentially infinite ways to suck up as many of those cheap cycles as they can; the main (finite, solvable) problems that Intel is investing in are about reducing those ideas to practice.
So, investors should anticipate that, yes, virtualization is going to suck away the hardware demand that was formerly generated by today’s software stacks, but they should also understand the high probability that at an unpredictable time in the next few years some new class of applications will arise and spark demand for yet another huge build-out.
Looking at hardware companies, I think investors (who are in it for the long haul rather than trying to time the market) should look at how the firms are taking care of their plants, the quality of their sales efforts, the longevity and flexibility of their hard capital, the degree to which they are bringing extreme energy-efficiency concerns to their designs, etc. Nothing has changed since last week, in other words.
-t
Oh, just thought of an even more immediate thing than future (researchy) applications:
So, today firms depend on certain stacks and virtualization is going to make those an order of magnitude less expensive to host.
“Aha!” says the I.T. manager, “Do you know how many want-list projects we’ve put off because hardware would have been too expensive?”
-t
Tom,
Good points all. But the fact (and it is a fact) that new applications will emerge that will suck up huge numbers of processor cycles doesn’t mean that those applications will run as they did in the past – fragmented across zillions of privately owned machines running in local data centers. The fact is, basic business processes, from financial management to HR management to supply chain management to distribution management, don’t change that much – they’re about and will always be about processing and keeping track of a lot of fairly routine transactions in various forms – and are not going to suck up a lot more processing cycles in the future. In fact, there’s every reason to believe that they’ll suck up fewer, as enterprise applications are reengineered to be more efficient. Yes, many companies are going to use new, heavy-duty analytical applications, but they’re not going to need to be as broadly installed as the traditional transactional applications. So the bulk of the new, computing-intensive apps (whether used by companies or, more likely, average joes) will probably run in central utility stations on the Net (a la Google’s) and will be broadly shared, hence used much more efficiently than applications generally were in the past. And it’s worth remembering that while Google runs hundreds of thousands, perhaps millions, of servers, it doesn’t buy them from computer companies. It makes them.
Nick
Nick,
You make a brilliant argument but you start from a premise I do not accept:
The fact is, basic business processes, from financial management to HR management to supply chain management to distribution management, don’t change that much – they’re about and will always be about processing and keeping track of a lot of fairly routine transactions in various forms – and are not going to suck up a lot more processing cycles in the future.
May I try out some ideas on you? I am speaking from the perspective of 20 years of programming as a professional (almost to the day!) and as such a person who is currently working on some (modern) software for managing ordinary office transactions.
A lot of legacy systems are poorly served by current forms of virtualization and are likely to become more so. This is a little bit subtle. The legacy systems actually deployed in the world are a mishmash of hardware, different OSes and different versions of those OSes, different applications and different versions of those... This is a sharp limit on what contemporary efforts at utility computing can do on their own.
For example, I don’t expect Google or anyone else, anytime soon, to start selling a plug-and-play replacement for an old XP box, or an IBM mainframe, or a typical Linux deployment. People who want to substitute virtualized commodity computing services for those physically deployed systems must, by and large, migrate to an entirely new platform. There is not nearly enough meaningful standardization in today’s deployed systems to predict an avalanche migration to next Tuesday’s computing cloud.
Next Tuesday’s commodity cloud is a new platform, and it lacks applications. In fact, the platform lacks even a specification – it is *impossible* to start writing applications for the new cloud today. So there’s nothing to run on it, none of your existing systems port cleanly to it, and so far it’s being built out by a small number of companies with evident monopolistic or oligopolistic intent. Ready to sell off your enterprise’s private hardware yet?
Now, apps will come, sure, but hold on: they’ll largely be delivered over the web. That means the winning apps will exploit the intrinsically globalized XML infrastructure. That was the hype when XML first gained attention. You saw it start to play out when, for example, more browser clients could be counted on to handle XSLT stylesheets. You saw the dawning of light over Marblehead when “the masses” of workman programmers turned on to Ajax. The bustle this morning includes elaborations like the semantic web, SOA, modern Internet messaging, VoIP, etc. Progress on the hard problem of internationalizing software has been profound, thanks to the way XML and Unicode travel hand in hand. The early XML hype was about right, and it relates to “the cloud as platform” this way:
As people find the need to migrate away from applications that have reached end of life – which, by and large, they must do by starting to use the cloud – they will naturally demand fully modern web applications, using XML-based data models, fully internationalized, etc.
The “customer experience” for the enterprise will be, initially, “the same old” transaction-tracking apps, but this time via a browser. But then they’ll enjoy using software that enjoys a global market and that plays nice with things like SOA and Ajax, getting us a giant step closer to the quest for flexible enterprise IT. But there’s a price:
The web and the XML data models are intrinsically dynamic and strongly “interpretive”. They also carry overhead for all the vital but niggling details of, e.g., internationalized text handling. That *can* and therefore *will* be predominantly implemented at a cycle-cost of between 10x and 100x the cycle-cost of the legacies being replaced (those XP apps and old IBM mainframe apps). I generally mean a parallelizable 10x or 100x, too: you can have the same baseline functionality, plus all the web goodness, plus the same throughput – for 10…100x the cycles.
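To give a feel for where that interpretive overhead comes from, here’s a toy micro-benchmark (my own illustrative sketch, not a measurement of any real stack): parsing one transaction record from XML versus unpacking the same fields from a legacy-style fixed binary layout.

```python
import struct
import timeit
import xml.etree.ElementTree as ET

# Toy comparison: "interpretive" XML handling vs. a legacy-style fixed
# binary record. The 10x-100x figure above is a claim about whole stacks;
# this only illustrates one source of the overhead. The record itself is
# made up for the example.
xml_record = b"<txn><id>42</id><amount>1999</amount><curr>USD</curr></txn>"
bin_record = struct.pack(">ii3s", 42, 1999, b"USD")  # same fields, fixed layout

def parse_xml():
    root = ET.fromstring(xml_record)
    return int(root.findtext("id")), int(root.findtext("amount")), root.findtext("curr")

def parse_bin():
    return struct.unpack(">ii3s", bin_record)

n = 100_000
print("xml:", timeit.timeit(parse_xml, number=n))
print("bin:", timeit.timeit(parse_bin, number=n))
```

The exact ratio varies by machine, but the gap is typically an order of magnitude or more – and that’s before layering on internationalization, schema validation, and the rest of the web goodness.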
You’re telling me that boxes have been sucking up power and delivering some single-digit percentage of their cycles – but that experiments in virtualization bump that utilization by one or two orders of magnitude. I’m telling you that those payoffs only occur in closed environments where there is a pre-existing uniformity of platform to virtualize, and that, in the open market, people will shift to commodity clusters by migrating app by app – to apps that cost enough cycles that hardware demand will remain, on average, about the same. (There’s a deep reason for this: bleeding-edge engineers are always designing for next-generation environments so that their results are well timed.)
You also fret about centralization and about things like Google (essentially) getting into the server-assembly business. Bah. Google has too much cash to know what to do with. The coming change is that successful hardware vendors will look more like suppliers to a Home Depot and less like suppliers to an Ikea. Google is using Ikea-type suppliers and trying to drive the market towards more of a Home Depot deal.
Finally, you fret about centralization in general: I wouldn’t. The market will make mistakes as it learns, but over-centralization is very bad for the customer – too fragile. The detailed boundaries of private platform ownership will change, but there will – and should – be diverse ownership all up and down the fractal scales of the cloud. Again, it’s the emerging platform definition that will not only enable but encourage this.
-t