Dell heads for the cloud

Dell hasn’t yet rolled out its trailer computer, but this week it did make clear that it aims to compete with the likes of Sun, IBM and Rackable in supplying the heavy metal for the big-ass data centers of the future. On Tuesday, the company announced that it was setting up a Data Center Solutions unit to serve Internet companies and other firms that “are building out fleets of servers with power, thermal, and density issues of such a massive scale that they invite new approaches for optimization.” The new unit’s debut offering is dubbed the “Cloud Computing Solution,” which VP Forrest Norrod calls “a computing solution for ‘Hyperscale’ data center environments including ‘design-to-order’ hardware, planning and deployment expertise and customized services targeted to address key needs generally not met by traditional hardware vendors.”

Although Dell claims that the new “design-to-order” business is a logical “next step” from its traditional “build-to-order” operation, it’s hard to see much of a connection between the two. Designing and outfitting customized high-end data centers is about as far removed from cranking out cheap boxes as you can get. This is another clear sign that Dell is abandoning the old “don’t innovate; commoditize” mantra that served it so well for so many years. Having watched competitors like Hewlett-Packard eat away at its cost advantage in the generic PC and server markets, Dell is now trying to follow other hardware makers up-market into more specialized (and profitable) machines and services. That will require bigger investments in R&D and, in turn, a different kind of cost structure – as well as a new and very different positioning and image in the enterprise market. It’s a tough challenge.

There’s also the larger question of how big the “hyperscale computing” market will turn out to be. Will there be a lot of companies building next-generation data centers, or just a few giant Google-like utilities with a lot of purchasing clout (and a lot of homegrown engineering talent)? The supply side’s already getting crowded, and the demand side is still pretty, uh, cloudy.

I do have one recommendation for Dell, though. When it gets around to introducing its trailer computer, it should let the guys at its Alienware subsidiary design the sucker. I would like to see some Alienware trailers barreling down the highway.

UPDATE: Adding to Dell’s challenges, the company announced today, after the stock market closed, that an internal investigation into accounting irregularities revealed “a number of accounting errors, evidence of misconduct, and deficiencies in the financial control environment.”

UPDATE: Larry Dignan follows up with an interview with Forrest Norrod. Norrod doesn’t get into details, but he does give an intriguing hint that in the end this may turn out to be another commoditization play for Dell, as the design of “hyperscale” data centers evolves toward a few standard setups:

Here’s the [current] process: Big customer comes to Dell for data center services and gives the company its specification and infrastructure plans. From there, the customer’s plan is discussed to address everything from power supplies to processor requirements to cooling to software to network capabilities. “We go through discovery of requirements and constraints,” said Norrod. “We come back in about a month with a [custom hardware configuration]” …

The big question is whether this model could scale for Dell. Norrod acknowledges that there are “limits to scalability,” but over time these [hardware configurations] could be mass produced. Dell’s [Cloud Computing System] customers currently all have different solutions to data center design, but some commonality is emerging. For instance, a large Web company that is a Dell customer cooked up a data center design that looked a lot like what a financial services firm was attempting. “Today the equipment is relatively diverse, but there is some commonality. Classes of applications and infrastructure philosophies will wind up being common for approaches,” said Norrod.

Of course, isn’t that what Sun and Rackable are already doing with their trailers – creating common hardware components at the data-center level rather than the server level?

Showdown in the trailer park II

Greg Linden points to an excellent paper by Microsoft’s James Hamilton that puts trailer park computing into the broader context of the evolution of the data center in the emerging age of utility computing. Hamilton presented his paper at the Biennial Conference on Innovative Data Systems Research in January (PowerPoint slides also available). Hamilton’s ideas provide a complement and counterpoint to a presentation on the future of computing demand that Sun’s Greg Papadopoulos gave in February (pdf of slides).

Both Hamilton and Papadopoulos see a similar set of trends – notably, the growing popularity of software as a service, the growing demand for supercomputing in certain industries like pharmaceuticals and financial services, and the expansion of computer-intensive “Web 2.0” applications aimed at consumers – spurring demand for extremely powerful, extremely reliable, and extremely efficient data centers. The big question is: How will this demand be filled? What will turn out to be the default architecture for the utility data center? Will it be constructed out of cheap commodity components (a la Google and Amazon), or will it be constructed out of specialized utility-class machinery supplied by companies like Sun, Rackable, and 3PAR? (Google itself is rumored to have its own fleet of homemade containerized data centers.)

Papadopoulos, true to his Sun heritage, argues for the specialized model, while Hamilton, true to his Microsoft heritage, argues for the commodity model. But for the commodity model to work, contends Hamilton, the commodity components will need to be combined into a meticulously engineered system that achieves high power, reliability, and efficiency – a data center in a box, or, more precisely, a data center in a container. The portability of a containerized data center, Hamilton further argues, will allow for a utility operation to be based on a network of smaller data centers, deployed in many different places, which he believes will offer social and political advantages over large, monolithic centers. As he puts it:

Commodity systems substantially reduce the cost of server-side computing. However, they bring new design challenges, some technical and some not. The technical issues include power consumption, heat density limits, communications latencies, multi-thousand-system administration costs, and efficient failure management with large server counts. The natural tendency is towards building a small number of very large data centers and this is where some of the non-technical issues come to play. These include social and political constraints on data center location, in addition to taxation and economic factors. All diminish the appeal of the large, central data center model. Multiple smaller data centers, regionally located, could prove to be a competitive advantage.

To address these technical and non-technical challenges, we recommend a different granule of system purchase, deployment, and management … we propose using a fully-populated shipping container as the data-center capitalization, management, and growth unit. We argue that this fundamental change in system packaging can drive order-of-magnitude cost reductions, and allow faster, more nimble deployments and upgrades.

As Hamilton notes, the shift to viewing an entire data center as a “unit” of computing would be consistent with the ongoing shift from buying individual servers to buying racks of servers. He makes a good case that the standard shipping container provides the ideal form factor for this new computing unit:

Shipping containers are ideal for the proposed solution: they are relatively inexpensive and environmentally robust. They can be purchased new for $1,950 each, while remanufactured units range around $1,500. The units are designed to successfully transport delicate goods in extreme conditions and routinely spend weeks at a time on the decks of cargo ships in rough ocean conditions and survive severe storms. More importantly, they are recognized world-wide: every intermodal dispatch center has appropriate cargo handling equipment, a heavy duty fork lift is able to safely move them, and they can be placed and fit in almost anywhere. The container can be cost-effectively shipped over the highway or across oceans. Final delivery to a data center is simple using rail or commercial trucking. The container can be placed in any secure location with network capacity, chilled water, and power and it will run without hardware maintenance for its service life …

At the end of its service life, the container is returned to the supplier for recycling. This service life will typically be 3 years, although we suspect that this may stretch out in some cases to 5 years since little motivation exists to be on the latest technology.

Though Papadopoulos and Hamilton take different views of the kinds of components – commoditized or specialized – that will be assembled into these boxes, they seem to agree that what will matter is not the component but the entire system. If they’re right, then the computer industry may, as I’ve suggested before, be going back to more of a mainframe model, where profits lie in high-end engineering rather than low-end assembly.

That remains to be seen. In the meantime, Hamilton’s paper is a must-read for anyone interested in the future of computing.

Showdown in the trailer park

Sun, which has its Blackbox containerized data center out on tour, is suddenly facing some tough competition in the burgeoning trailer park computing market. Rackable Systems is rolling out a copycat product called Concentro that may just outdo the original. Besides sporting a most excellent name – if Flash Gordon had a computer, it would be called Concentro – Rackable’s portable data center comes in a 40-foot shipping container, making Sun’s 20-foot model look downright wimpy, and it can be packed with 9,600 processing cores or 3.5 petabytes of storage. Best of all is the interior. Check it out:

[Image: the interior of Rackable’s Concentro portable data center]

Stick David Bowie in there, and you’ve got your moody sci-fi epic half made.

For more details, see articles by the Register’s Ashlee Vance and IT Jungle’s Timothy Prickett Morgan. Prickett Morgan notes that Concentro comes equipped with LoJack, just in case some meth-addled kid decides to hook it to the back of his Ram Charger and go for a joyride.

Born again again

LifeChurch.tv, the high-tech evangelical church, has opened a 16-acre campus in Second Life, reports New Scientist. Bobby Gruenewald, LifeChurch.tv’s Pastor-Innovation Leader, writes on his blog that the virtual church will feature on-demand video, free virtual t-shirts, a special “LifeKids” area for little avatars, and something called a “Mysecret.tv glass house.” Beyond providing a gathering place for the virtual faithful, LifeChurch.tv hopes to use its presence to help redeem Second Life. Writes Gruenewald: “I need to warn you that there is a huge problem on Second Life with porn and ‘virtual sex.’ It is one of several reasons we are there, but it is also something that you need to be on guard about.” To combat the licentiousness, LifeChurch.tv has invited the “Porn Pastors” from xxxchurch.com to set up a mission within the campus.

The establishment of virtual churches and congregations of avatars would seem to raise some knotty theological questions, which I’m not sure the LifeChurch.tv pastors have fully thought through. In creating virtual worlds, aren’t we usurping God’s role – and hence committing a heresy? Are avatars created in God’s image or our own? Can they be saved? Can they be damned? Does sin even exist in a virtual world? Is there a Second Afterlife?

Oracle v. SAP

I sat down this morning with a cup of coffee or four and read through the 43 pages of Oracle’s lawsuit against SAP. It makes for fascinating reading, but I was disappointed to discover that the alleged skullduggery doesn’t quite live up to the hype of the complaint’s memorable first sentence: “This case is about corporate theft on a grand scale, committed by the largest German software company – a conglomerate known as SAP.” “Grand scale” feels like an overstatement, and despite the hint of corporate jingoism in that opening sentence, Oracle doesn’t present any hard evidence that the scheme went beyond one SAP subsidiary in the very American state of Texas.

The story begins in January 2005, when Oracle completed its acquisition of PeopleSoft, a major supplier of enterprise resource planning (ERP) applications and a big SAP competitor. (PeopleSoft itself had recently acquired another large ERP supplier, J.D. Edwards.) That same month, and in response to the Oracle acquisition, SAP bought TomorrowNow, a small Texas firm set up by former PeopleSoft employees that was in the business of providing support to companies using PeopleSoft programs. Buying TomorrowNow (subsequently renamed SAP TN) allowed SAP to get its foot in the door of some PeopleSoft customers, many of whom were unhappy with PeopleSoft’s merger into Oracle. In addition to getting support revenues from PeopleSoft clients, SAP clearly hoped that it would be able to convince some of them to switch to SAP applications – through what it called its “Safe Passage” program.

TomorrowNow’s central pitch was that it could dramatically reduce the ongoing support and maintenance fees that corporations pay to the vendors of complex ERP applications to keep the systems running. Oracle alleges that the reason TomorrowNow was able to keep its fees so low was that its employees broke into PeopleSoft’s customer support website and downloaded the software and documents required to maintain, troubleshoot, and update PeopleSoft software. In other words, according to the suit, instead of developing its own intellectual property, SAP TN simply stole PeopleSoft’s (and hence Oracle’s). As the suit charges:

It was not clear how SAP TN could offer, as it did on its website and its other materials, “customized ongoing tax and regulatory updates,” “fixes for serious issues,” “full upgrade script support,” and, most remarkably, “30-minute response time, 24x7x365” on software programs for which it had no intellectual property rights. To compound the puzzle, SAP continued to offer this comprehensive support to hundreds of customers at the “cut rate” of 50 cents on the dollar, and purported to add full support for an entirely different product line – Siebel [which Oracle acquired later in 2005] – with a wave of its hand. The economics, and the logic, simply did not add up.

Oracle has now solved this puzzle. To stave off the mounting competitive threat from Oracle, SAP unlawfully accessed and copied Oracle’s Software and Support Materials.

In late 2006, Oracle says it noticed anomalies in certain customers’ use of the PeopleSoft support site. In particular, some customers were clicking through the site with “lightning speed” – indicating that an automated program was being used to rapidly scan and copy the site’s contents. Oracle launched an investigation and soon, it says, “discovered a pattern”:

Frequently, in the month before a customer’s Oracle support expired, a user purporting to be that customer, employing the customer’s log-in credentials, would access Oracle’s system and download large quantities of Software and Support Materials, including dozens, hundreds, or thousands of products beyond the scope of the specific customer’s licensed products and permitted access. Some of these apparent customer users even downloaded materials after their contractual support rights had expired.
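The “lightning speed” click-throughs Oracle describes are exactly the kind of thing a simple scan of access logs can surface. Here’s a minimal sketch, in Python, of rate-based detection of scripted browsing; the log format and the one-request-per-second threshold are my own illustrative assumptions, not anything from Oracle’s actual systems:

```python
# A toy sketch (not Oracle's real tooling) of rate-based anomaly detection:
# flag any login whose requests arrive faster than a human could click.
from collections import defaultdict
from datetime import datetime

# Hypothetical log format: (timestamp, customer_login, requested_path)
log_entries = [
    ("2006-11-20 14:02:01", "cust_a", "/support/patch/001"),
    ("2006-11-20 14:02:02", "cust_a", "/support/patch/002"),
    ("2006-11-20 14:02:02", "cust_a", "/support/patch/003"),
    ("2006-11-20 14:09:30", "cust_b", "/support/docs/faq"),
]

MAX_HUMAN_RATE = 1.0  # requests per second; anything faster looks scripted

def flag_scripted_sessions(entries):
    """Group requests by login and flag logins whose average request
    rate exceeds what a person clicking through pages could sustain."""
    sessions = defaultdict(list)
    for ts, login, _path in entries:
        sessions[login].append(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"))
    flagged = []
    for login, times in sessions.items():
        if len(times) < 2:
            continue
        span = (max(times) - min(times)).total_seconds() or 1.0
        rate = len(times) / span
        if rate > MAX_HUMAN_RATE:
            flagged.append((login, rate))
    return flagged

print(flag_scripted_sessions(log_entries))  # [('cust_a', 3.0)]
```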

Oracle says it traced the suspicious activity to an IP address at TomorrowNow’s headquarters in Bryan, Texas:

Although it is now clear that the customers initially identified by Oracle as engaged in the illegal downloads are SAP TN customers, those customers do not directly appear to have engaged in the download activity; rather, the unlawful download activity observed by Oracle and described here originates directly from SAP’s computer networks. Oracle’s support servers have even received hits from URL addresses in the course of these unlawful downloads with SAP TN directly in the name … The wholesale nature of this unlawful access and downloading was extreme. SAP TN appears to have downloaded virtually every file, in every library that it could find.

Oracle charges that “SAP TN conducted these high-tech raids as [parent company] SAP AG’s agent and instrumentality and as the cornerstone strategy of SAP AG’s highly-publicized Safe Passage program.” But the suit supplies, so far as I could see, no evidence that anyone beyond the TomorrowNow headquarters authorized or knew of the alleged “raids.” Oracle does say it has “concerns that SAP may have enhanced or improved its own software applications offerings using information gleaned from Oracle’s Software and Support Materials,” but, again, the suit itself offers no evidence to back up that charge.

Clearly, if SAP TN did what Oracle claims, it at the very least violated licenses and copyrights (though it’s worth remembering that the materials were contained in a public site open to thousands of Oracle customers). If it turns out that the scheme was a rogue operation carried out by a few TomorrowNow employees, it will cost SAP considerable embarrassment and, likely, some cash. If it turns out that it was part of a larger conspiracy of corporate espionage, and resulted in Oracle intellectual property being incorporated into SAP software, that would be a much, much larger problem for SAP. But, as I noted, we don’t have any clear evidence of the latter scenario.

The suit does raise some other issues. Not least is the apparent ineptitude displayed by both companies. If, as Oracle claims, the copying of the materials on its support site caused it “irreparable injury,” one has to wonder why it was so lackadaisical in protecting the site and the materials. By Oracle’s own admission in the suit, gaining access to all the code and documents on the site seems to have been almost farcically easy:

In many instances, including the ones described above, SAP employees used the log-in IDs of multiple customers, combined with phony user log-in information, to gain access to Oracle’s system under false pretexts … These “customer users” supplied user information (such as user name, email address, and phone number) that did not match the customer at all. In some cases, this user information did not match anything: it was fake. For example, some users logged in with the user names of “xx” “ss” “User” and “NULL.” Others used phony email addresses like “test@testyomama.com” and fake phone numbers such as “7777777777” and “123 456 7897.”

You’d think a big, sophisticated software company like Oracle might have been able to write some code that would sniff out “7777777777” as a suspicious phone number or “test@testyomama.com” as a fake email address. Weak security doesn’t exonerate theft, of course, but it does seem to undercut Oracle’s claims about the value of the contents of the site.
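For what it’s worth, here’s a toy sketch of the sort of sanity check I have in mind, seeded with the phony values quoted from the complaint; the specific rules are my own guesses, not Oracle’s (or any vendor’s) actual validation logic:

```python
# Illustrative heuristics for sniffing out obviously bogus registration
# details -- a guess at what such a check might look like, nothing more.
import re

def looks_fake(name: str, email: str, phone: str) -> bool:
    """Return True if the supplied user details look obviously bogus."""
    digits = re.sub(r"\D", "", phone)
    if name.strip().lower() in {"xx", "ss", "user", "null", ""}:
        return True
    if len(set(digits)) <= 1:  # e.g. "7777777777", or no digits at all
        return True
    if re.fullmatch(r"123\s*456\s*\d{4}", phone.strip()):  # sequential filler
        return True
    if re.search(r"@test|test@|@example\.", email.lower()):
        return True
    return False

print(looks_fake("xx", "test@testyomama.com", "7777777777"))      # True
print(looks_fake("Jane Smith", "jane@acme.com", "512 555 0143"))  # False
```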

As for the alleged SAP TN trespassers – jeez, guys, couldn’t you have at least tried to cover your IP tracks a little bit? They were so blasé about getting into and copying the site that you might almost think such shenanigans are common in the enterprise software business. (One imagines that, as SAP prepares a response to the suit, it is rushing to comb through its own support site logs for any evidence of activity by Oracle employees.)

Finally, the vast quantity of patches, updates, and explanatory documents that were taken from the Oracle support site gives eloquent if unintended testimony to the enormous complexity of traditional enterprise software – and goes a long way toward explaining why so many companies have been eager to explore alternatives like open-source programs and the delivery of applications as services over the net. Maybe some day software support sites will be simpler affairs representing much less economic value – and hence providing a much less tempting target for pilfering.

UPDATE: Michael Hickins, of Internet News, examines some of the legal ramifications of the suit:

Eric Goldman, director of the High Tech Law Institute at the Santa Clara University School of Law, said that if allegations in the complaint are true, “then SAP is in a world of trouble.” According to Goldman, by law, each instance of copyright infringement costs the guilty party $150,000; Oracle has claimed that there are 10,000 such instances, but even if it can only prove 500 instances, that still amounts to $75 million. And that’s only the beginning. If found guilty, SAP would not only have to disgorge all of its “ill-gotten gains,” but reimburse Oracle for its losses. Given the extent of these claims, if true, the U.S. Department of Justice could step in as well and begin criminal proceedings.

I’m not a lawyer, nor do I play one on the Internet, so I have no way to evaluate Goldman’s assessment, but it does suggest that the suit’s stakes may be high.
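Still, the arithmetic in Goldman’s scenario is easy enough to check. Taking the quoted $150,000-per-instance figure at face value, a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope math on Goldman's numbers, as quoted above.
STATUTORY_MAX = 150_000  # dollars per proven instance, per the quote

for instances in (500, 10_000):
    print(f"{instances:>6} instances -> ${instances * STATUTORY_MAX:,}")
# Output:
#    500 instances -> $75,000,000
#  10000 instances -> $1,500,000,000
```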

Are CIOs “dead weight”?

In my commentary on the latest Financial Times Digital Business podcast, I look at Chris Anderson’s charge that chief information officers are turning into “dead weight.” In case you missed it, Anderson had a provocative post on his blog late last month titled “Who Needs a CIO?” He’d given a speech at a CIO Magazine conference and came away from the event disillusioned:

You might have expected, as I had, that most Chief Information Officers wanted to know about the latest trends in technology so they could keep ahead of the curve. Nothing of the sort. CIOs, it turns out, are mostly business people who have been given the thankless job of keeping the lights on, IT wise. And the best way to ensure that they stay on is to change as little as possible. That puts many CIOs in the position of not being the technology innovator in their company, but rather the dead weight keeping the real technology innovators – employees who want to use the tools increasingly available on the wide-open Web to help them do their jobs better – from taking matters into their own hands.

Anderson continued:

… many CIOs are now just one step above Building Maintenance. They have the unpleasant job of mopping up data spills when they happen, along with enforcing draconian data retention policies sent down from the legal department. They respond to trouble tickets and disable user permissions. They practice saying “No”, not “What if…”

Christopher Koch, the executive editor of CIO Magazine, took umbrage at Anderson’s missile-like missive. On his own blog, he wrote:

Wow, did Chris Anderson, editor of Wired magazine, get some bad shrimp at the buffet when he spoke at our CIO conference a few months ago? [Anderson’s] premise for this post – that CIOs are business people exiled to the wasteland of IT – is completely without basis. Of the more than 500 CIOs we survey every year for our State of the CIO Survey, 80 percent have a technology background, not a business background – and that number has remained consistent since we started doing the survey in 2002. If there is a problem for CIOs these days, it is that their technology background gives business people the perception that CIOs are incapable of coming up with ways that IT can benefit the business … I would also argue that part of IT’s resistance to Web 2.0 can be traced to the fact that it isn’t really Web 2.0 at all. It’s Web 1.1. There are no FUNDAMENTALLY new ways of connecting people or exchanging value here, which makes a lot of it seem redundant to a CIO charged with maintaining application integrity, security and network performance.

There are a couple of different skirmishes going on here – over the identity of CIOs as well as over the value of new Web technologies – but, as I note in the FT commentary (pardon the self-quote), “what’s most interesting is that, once you peel back their rhetorical differences, you find that [Anderson and Koch] are largely in agreement. They both believe that most CIOs serve mainly a control function rather than one of innovation.” That’s a big change from the prevailing view about the direction of the CIO job at the dawn of this decade, when it was commonly assumed that the IT department would become the locus of not just IT innovation but business innovation in general.

But is “keeping the lights on” really so bad? One actual CIO, in a comment on Koch’s post, rose to the defense of the control role:

Keeping the lights on is important. Every morning, 1,500 people log in to our network and they expect their apps to work. Making sure their data is protected and that they have access to it 99.999% of the time is mission-critical to us … Our job is to find ways to use technology to advance the goals of the enterprise, not to find excuses to implement things because they’re new, cool, or will look good on our resumes.
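It’s worth pausing on that “99.999%”: five nines is a demanding target, and a quick calculation shows just how small a downtime budget it implies:

```python
# The downtime budget implied by a given availability target, over one year.
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.999, 0.9999, 0.99999):
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime -> {downtime:.1f} minutes of downtime/year")
# 99.900% uptime -> 525.6 minutes of downtime/year
# 99.990% uptime -> 52.6 minutes of downtime/year
# 99.999% uptime -> 5.3 minutes of downtime/year
```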

It’s a fair point – running a tight IT ship is no easy accomplishment, particularly in a large organization – but I have no doubt that it’s not the last word in the seemingly endless debate about the role of the CIO. Of all “C-level” positions, the CIO post remains the least well defined and the most prone to identity crises. That’s probably a reflection of a deeper tension – the tension between the myth of business IT and the somewhat more pedestrian reality.