It’s nice to be an enterprise software company. You can start charging your customers for your product long before they actually begin using it, and when (or if) they do get it up and running, you can charge them an ongoing license fee based on a self-serving theory of usage rather than actual usage. Server software, for instance, is typically priced according to the number of processors that the software runs on, regardless of the actual utilization of the software. Applications tend to be priced either per processor or per user, again independent of how intensively the software is actually employed.
But the old pricing model is now crumbling, thanks in large measure to virtualization (as well as the introduction of multicore processors). With virtualization, a company can carve up a single server into a lot of “virtual” servers, each of which can run its own operating system and application. Virtualization can dramatically reduce data-center costs, because it enables servers to be run at much higher levels of utilization (so you need far fewer of them). But the shift to virtualization has been slowed by the old software pricing model. If, for instance, you set up five virtual servers on a single, four-processor machine, you may have to pay for four separate per-CPU licenses for each of the five server operating systems and each of the five applications – after all, they’re all theoretically employing the four processors in the underlying server. Those software fees can drastically diminish virtualization’s cost savings, reducing companies’ incentives to embrace the new technology.
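The license arithmetic in that scenario can be made concrete with a short sketch. This is purely illustrative: the function name and the assumption that every guest is licensed for every underlying processor are taken from the example above, not from any vendor's actual price list.

```python
# Illustration of the old per-processor licensing model: every guest OS
# (and every application) is licensed for all CPUs in the physical box.

def per_processor_licenses(virtual_servers: int, physical_cpus: int) -> int:
    """Number of per-CPU licenses owed for one software product
    running in every virtual server on the machine."""
    return virtual_servers * physical_cpus

# The scenario from the text: five virtual servers on a four-processor machine.
os_licenses = per_processor_licenses(virtual_servers=5, physical_cpus=4)
app_licenses = per_processor_licenses(virtual_servers=5, physical_cpus=4)

print(os_licenses)                 # 20 per-CPU fees for the operating systems
print(os_licenses + app_licenses)  # 40 fees in all for one physical server
```

Forty license fees for a single machine is exactly the kind of overhead that eats into virtualization's savings.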
Fortunately, though, the marketplace hates inefficiency and will, in time, force software vendors to change their pricing policies. A step in the right direction was taken yesterday by Microsoft, when it announced changes to the way it prices its server software. Under the new policy, which takes effect December 1, users will be charged according to the number of virtual servers running the software, rather than the number of processors on the underlying physical server. So if you set up a virtual server running Windows Server on a four-processor machine, you’ll be charged one license fee rather than four. (Of course, if you set up five virtual Windows servers on that same machine, you’ll get hit with five rather than four license fees – there’s always a catch.) Microsoft’s move will put pressure on other software makers to begin changing their pricing terms as well.
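The difference between the two policies, including the "catch," comes down to a change of multiplier. The sketch below is a simplification of the pricing described above, with invented function names and no attempt to model real license terms.

```python
# Old policy: one license per processor, for every virtual server.
def old_model(virtual_servers: int, physical_cpus: int) -> int:
    return virtual_servers * physical_cpus

# New policy (per the announcement): one license per virtual server,
# regardless of how many processors sit underneath it.
def new_model(virtual_servers: int, physical_cpus: int) -> int:
    return virtual_servers

# One virtual Windows server on a four-processor machine:
assert old_model(1, 4) == 4   # four per-CPU fees
assert new_model(1, 4) == 1   # one fee

# Five virtual servers on the same machine -- the catch:
assert old_model(5, 4) == 20
assert new_model(5, 4) == 5   # five guests, five fees
```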
But this is only the beginning. What Microsoft has done is tweak the traditional pricing model, bringing it more in line with the reality of modern computing. It’s just a stop-gap measure – a finger in a cracking dike. In the end, the old pricing model will need to be abandoned. Virtualization won’t, after all, be confined to individual servers. Ultimately, large networks (or “farms,” or “grids”) of physical servers will be virtualized, with their combined capacity allocated to various applications based on moment-by-moment shifts in demand. The usage of every piece of software, moreover, will be precisely measurable. At that point, all the traditional methods of software pricing – whether per-processor or per-server or per-user – will go out the window. Software fees will be based not on generic theories of usage but on actual usage. You’ll have software meters just as you have electricity and gas meters.
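The software-meter idea can be sketched in a few lines. Everything here is speculative: the class name, the choice of CPU-hours as the metered unit, and the billing rate are all invented to illustrate the analogy with a utility meter, not drawn from any actual product.

```python
# A speculative sketch of metered software pricing: the vendor bills
# for measured usage, the way a utility bills for kilowatt-hours,
# rather than for installed capacity.

class SoftwareMeter:
    """Accumulates measured usage and bills at a flat per-unit rate."""

    def __init__(self, rate_per_cpu_hour: float):
        self.rate = rate_per_cpu_hour
        self.cpu_hours = 0.0

    def record(self, cpu_hours: float) -> None:
        """Log actual consumption as it happens across the server farm."""
        self.cpu_hours += cpu_hours

    def bill(self) -> float:
        """The fee reflects what was used -- nothing more."""
        return self.cpu_hours * self.rate

meter = SoftwareMeter(rate_per_cpu_hour=0.05)
meter.record(100.0)  # a busy day
meter.record(60.0)   # a quieter one
print(f"${meter.bill():.2f}")
```

An idle application under this model costs nothing, which is precisely why, as the next paragraph suggests, incumbent vendors are in no hurry to get here.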
Expect software vendors to continue to drag their feet in changing how they charge for their products. They’ll maintain the old pricing model as long as they can – not only because it’s lucrative but because it impedes the shift to true virtual computing. (The full-scale virtualization of corporate data centers is a frightening prospect for most traditional IT suppliers.) But eventually the market will demand that they change. And if they still refuse? Well, that’s why God invented open source.