Between two worlds

Much of the software that runs businesses today is designed to run on a specific and static hardware setup. Basic operating parameters are hardcoded into the machines, and applications are written with those parameters in mind. Developers assume, as Martin Banks writes today at The Register, “that the operating system they are writing to is stable, that the amount of memory they have available is static, and that the amount of CPU utilization is static.”

Virtualization overturns those assumptions. Indeed, the whole point of virtualization is to replace static physical machines with dynamic virtual ones. By getting rid of hardcoding, you free your machines to be much more flexible and to operate at much higher levels of capacity utilization – to operate as a single system rather than as a bunch of discrete parts.

If companies are to reap the full benefits of virtualization, developers will need to change the way they write code. Banks’s article is a good one for anyone looking to understand the challenges involved. He interviews Sharad Singhal, of HP Labs, who notes that while companies are rushing to embrace virtualization, “many developers are continuing to write code that is not efficient and does not match the environments the code will have to run in. This poses problems for enterprises that are already moving to virtualized environments”:

Being able to exploit the flexibility of virtualization in terms of workload and capacity management is an obvious case in point. “For example,” Singhal said, “such an environment can detect that an application requires more capacity, but the application itself has not been written in a way that can make use of it. On the other hand, capacity may be taken temporarily from an application because a higher priority task requires it, but then that deprived application promptly crashes rather than being able to continue functioning in a degraded manner. What this means is that the development tools we give them are going to have to change over time.”
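
One way to picture what Singhal is asking for: an application that treats capacity as something to probe at run time rather than assume. The Python sketch below is purely illustrative (the work function, the bytes-per-item estimate, and the Linux-only memory probe are all assumptions, not anything from the article); it shrinks its batches when memory is taken away instead of falling over.

```python
import os
import time

def available_memory_bytes() -> int:
    """Best-effort probe of memory that is free right now (Linux-specific)."""
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_AVPHYS_PAGES")

def process(batch):
    """Placeholder for the real work done on each batch."""
    pass

def run(queue, bytes_per_item=1_000_000, floor=10):
    """Size each batch to whatever capacity the (virtual) machine has at this moment.

    If the hypervisor reclaims memory for a higher-priority workload, the
    batches shrink and throughput degrades, but the process keeps running
    instead of crashing.
    """
    while queue:
        budget = available_memory_bytes() // 2             # claim at most half of what's free
        batch_size = max(floor, budget // bytes_per_item)   # degrade gracefully, never to zero
        batch, queue = queue[:batch_size], queue[batch_size:]
        process(batch)
        time.sleep(0.1)                                     # capacity may look different next loop
```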

For some time to come, there’s going to be a tension between virtualized hardware and old-style applications. “Until developers catch up and recognize that they can take advantage of virtualization capabilities,” says Singhal, “the onus is on those doing the virtualization to present to the applications things that look [like] legacy environments.”

The first thing you have to virtualize, in other words, is the past. Only then can you begin to move into the future.
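
Here's a rough illustration of what "presenting the past" can mean in practice, with made-up names throughout: a facade that keeps advertising the fixed-size machine a legacy application expects, while the real capacity underneath keeps moving.

```python
import random

class DynamicHost:
    """Stand-in for the real virtual hardware, whose capacity shifts as the
    hypervisor moves resources between workloads."""
    def current_memory_mb(self) -> int:
        return random.choice([1024, 2048, 4096])   # simulated fluctuation

class LegacyFacade:
    """The static-looking machine a legacy application was written for."""
    def __init__(self, host: DynamicHost, advertised_memory_mb: int = 4096):
        self._host = host
        self._advertised_memory_mb = advertised_memory_mb   # frozen at "provisioning" time

    def total_memory_mb(self) -> int:
        # The legacy application gets the stable answer it expects; squaring
        # that promise with self._host.current_memory_mb() is the
        # virtualization layer's problem, not the application's.
        return self._advertised_memory_mb
```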

3 thoughts on “Between two worlds”

  1. Neil Macehiter

    I certainly don’t dispute that virtualisation poses new challenges. The question is where they are best resolved.

    The same issue arises with SOA initiatives, where application functionality is being virtualised and used in potentially multiple contexts, and where aspects of quality of service (performance, security, etc.) are delegated to the infrastructure, which is itself likely to be virtualised.

    To my mind, policy-based management approaches are the best way to resolve them. As an application developer I should be able to define, rather than code, my application’s resource and QoS requirements through policies which can then be enforced through a combination of the infrastructure and its management tools. This is especially true with service-oriented approaches, where the same service may be invoked by service consumers with very different security requirements (one behind and one external to the firewall, for example): should the developer of the service have to understand what those contexts are up front, or should policies be derived from the contracts specific to each interaction between provider and consumer?
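
    A minimal sketch of the idea, with hypothetical names throughout (the Policy fields and the infrastructure calls are stand-ins, not any particular product’s API): the developer declares requirements as data attached to the contract, and an enforcement layer, not the service code, maps them onto the virtualised infrastructure for each consumer context.

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Policy:
        """Resource and QoS requirements declared as data on the service
        contract, not coded into the service itself."""
        min_memory_mb: int
        max_response_ms: int
        encryption_required: bool

    # One service, two consumer contexts, two contracts.
    POLICIES = {
        "intranet-consumer": Policy(min_memory_mb=512,  max_response_ms=500, encryption_required=False),
        "external-partner":  Policy(min_memory_mb=1024, max_response_ms=200, encryption_required=True),
    }

    def enforce(consumer: str, infrastructure) -> None:
        """The management layer maps the declared policy onto the virtualised
        infrastructure for this interaction; the service developer never sees it.
        (The infrastructure methods here are hypothetical stand-ins.)"""
        policy = POLICIES[consumer]
        infrastructure.reserve_memory_mb(policy.min_memory_mb)
        infrastructure.set_latency_target_ms(policy.max_response_ms)
        if policy.encryption_required:
            infrastructure.require_transport_encryption()
    ```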

  2. Vlad Miloushev

    As a software architect, I wholeheartedly agree with Neil’s comment above. I think Sharad Singhal’s quote is a glaring example of the kind of warped thinking that causes most large-scale software projects to fail.

    If you examine any large-scale technical system that works, you will inevitably find that it was built mostly from pre-existing smaller systems that work. This is the natural law of large-scale systems, and it fully applies to software systems.

    What follows from this is that the only practical way to build a new system is by (a) using as many existing components/services as possible, (b) combining these existing components in new and valuable ways, and (c) building new components only where this creates significant value.

    To achieve this, one must ensure that existing components continue to work, in a stable and predictable way, in the context of the new system. For software components, this means giving them the execution environment that they were designed for, and virtualizing their interactions with the outside world.

    The very reason virtualization is so popular is that it helps achieve this exact result. Whenever I virtualize something, it is because I want to allow valuable existing code to work in a new environment “as is”, without having to modify it. Virtual memory, virtual disks and virtual machines are all examples of technology that allows one to reuse functionality in a new environment WITHOUT HAVING TO CHANGE IT.

    To complain that the code which runs in a virtual machine is not aware of the virtualization and needs to change is to display ignorance of the most basic principles of system design.
