Monthly Archives: August 2007

Dell’s tainted soul

Last week, Dell let it be known that it would bestow on its former CEO, Kevin Rollins, a payment of nearly $50 million for unexercised stock options. That was on top of a golden-parachute severance payment of $5 million that Rollins received when he was ousted six months earlier. This week Dell revealed that, under Rollins’s watch, it engaged in a high-level conspiracy to defraud investors by faking its earnings reports.

In April 2006, Rollins gave a speech at the University of Texas at Austin, sponsored by a campus religious group, in which he touted his and his company’s high ethical standards. The college newspaper reported on the speech:

Dell Computers uses strict ethical standards and whistle-blowing procedures to ensure that its employees maintain the highest standard of integrity, said Dell’s president and CEO in a speech on campus Tuesday night.

Kevin Rollins, who took over as president and CEO of Dell in 2004, said the company has a set of guidelines called “The Soul of Dell” that tells employees how to act ethically. The rules seem very simplistic and easy, but they take on new meaning in light of the exposure of misconduct at Enron Corporation, he said.

Dell operates on a “one strike, and you’re out” policy, Rollins said. If an employee commits one breach of ethics, he or she is fired. They are held to a higher standard than simply following the law, he said.

“If you want to operate another way, leave, because we do not want to have that at our company. We do not want to be tainted,” Rollins said.

Talk is cheap. If Kevin Rollins wants to maintain his own integrity as well as that of his former company, he should return the $50 million to the Dell shareholders who were cheated while he was in charge.

VMware, Xen and the hardware business

Yesterday’s feeding frenzy for VMware’s freshly spawned stock may or may not prove rational, but together with today’s news that Citrix is paying a hefty price to buy XenSource, it clearly shows that investors have taken notice of a sea change in the IT business: a great deal of money that would traditionally have gone into hardware is now going into software instead. VMware’s and Xen’s virtualization software allows companies to turn a single computer into two or more “virtual” computers, each of which can be doing a different job. Essentially, it allows computer owners to tap into the spare processing capacity of their machines – spare capacity that has traditionally been wasted and that appears to have increased steadily as Moore’s Law has pushed up the power of microprocessors.
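To make that concrete, here is a minimal sketch of how an administrator might carve one physical host into several guest machines, using the Python bindings of the open-source libvirt toolkit. The guest definition files are invented for illustration, and this is a generic example, not VMware’s or Xen’s own management interface.

    import libvirt

    # Connect to the hypervisor running on the local machine.
    conn = libvirt.open("qemu:///system")

    # Each XML file defines one guest (memory, virtual CPUs, disks, and so on).
    # The file names here are placeholders.
    for path in ("web-server.xml", "mail-server.xml", "database.xml"):
        with open(path) as f:
            conn.createXML(f.read(), 0)  # boot a transient guest from its definition

    # Three formerly separate servers now share one machine's spare capacity.
    print([dom.name() for dom in conn.listAllDomains()])
    conn.close()

Each guest behaves like an independent server, so work that once demanded three machines now runs on one.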

Some argue that virtualization will ultimately spur greater purchases of servers and other hardware. In a recent blog post, for example, Sun Microsystems CEO Jonathan Schwartz wrote, “I’d like to go on record saying virtualization is good for the technology industry – which seems to be counterintuitive. The general fear is that technologies like Solaris 10 or VMware that help people squeeze more work from the systems they already own is somehow bad for Sun. In my view, quite the opposite is true. As I said, when we double the speed of our computers, people don’t buy half as many – they tend to buy twice as many.”

Schwartz’s analysis has a basic flaw – he conflates two things, processor speed and processor capacity utilization, that, while related, are different – but that doesn’t necessarily mean his conclusion is wrong. In fact, if you look only at the near term, he may well be right. Virtualization will likely spur purchases of servers – for the simple reason that when companies consolidate their machines they often upgrade their hardware at the same time (to get the full benefits of virtualization).

But what about the longer term? Here, it seems to me, the picture is very different. It’s becoming clear that, for large companies in particular, the consolidation of hardware can take place on a truly vast scale. Siemens Medical Solutions, for instance, used virtualization to increase the capacity utilization of its servers from 4% (yes, you read that right) to about 65%, enabling it to reduce the number of servers it runs by about 90%. Hewlett-Packard, itself a major server supplier, is engaged in a massive consolidation effort of its own, which it expects to reduce the number of servers it runs by nearly a third even as it increases its available computing power by 80%. These are not unusual results, and consolidation ratios will likely continue to expand as virtualization, server, and processor technologies advance. When you consider that we are only at the beginning of a major wave of consolidation that will enable big companies to operate with far fewer computers, it becomes difficult to imagine that we’ll end up with more servers in corporate data centers.
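The arithmetic behind such consolidation is simple enough to check. Holding the total workload constant while raising per-server utilization from 4% to 65%, as in the Siemens example, implies a fleet roughly 94% smaller – in line with the reduction of about 90% that the company reports. A quick sketch, with an invented fleet size:

    # Back-of-the-envelope check on the Siemens numbers cited above.
    servers_before = 1000                 # hypothetical fleet size
    util_before, util_after = 0.04, 0.65  # utilization before and after virtualization

    workload = servers_before * util_before  # total load, in server-equivalents
    servers_after = workload / util_after    # machines needed at the new utilization

    print(round(servers_after))                                       # ~62 servers
    print(f"{1 - servers_after / servers_before:.0%} fewer servers")  # 94% fewer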

That doesn’t mean that there aren’t new applications of computing that will require substantial increases in processing power. I tend to agree with another Sun executive, CTO Greg Papadopoulos, that we’ll see an increasing demand for high-performance computing that will involve considerable investment in new hardware. But Papadopoulos also points out that the processing demands of mainstream business applications – the meat and potatoes of the enterprise IT market – are not keeping up with the expansion of computing supply delivered by Moore’s Law. In other words, the hardware requirements of most business applications are going down, not up – and consolidation opportunities will only grow.

Moreover, much of the investment in high-performance computing is going into building central, utility data centers – for software-as-a-service applications, computing grids, and storage depots – that themselves displace the need for businesses to buy additional hardware. When a company signs up for, say, Salesforce.com’s CRM service instead of a CRM application from Oracle or Microsoft, it avoids having to buy its own servers. Similarly, when a small business or a school decides to use Google’s Gmail platform to run its email, it can avoid having to buy and run its own email server.

In other words, the consolidation of servers and other gear is not only occurring at the level of the individual company, through the adoption of virtualization and other technologies for improving capacity utilization. Utility computing opens the way for consolidation to occur across entire industries as well – and, indeed, across the entire economy. While the total demand for computer processing cycles seems certain to continue its inexorable rise, that no longer means that the overall demand for computers needs to rise with it. The hardware business is changing, and investors are wise to take notice.

ERP’s troubled legacy

Over the last two decades, companies have plowed many billions of dollars into enterprise resource planning (ERP) systems and the hardware required to run them, and the largest purveyors of the complex software packages, notably SAP and Oracle, continue to earn billions every year selling and maintaining the systems. But what, in the long run, will be the legacy of enterprise systems? Will ERP be viewed as it has been promoted by its marketers: as a milestone in business automation that allowed companies to integrate their previously fragmented information systems and simplify their data flows? Or will it be viewed as a stopgap that largely backfired by tangling companies in even more systems complexity and even higher IT costs?

In The Trouble with Enterprise Software, an article in the new issue of the MIT Sloan Management Review, Cynthia Rettig deftly lays out the case for the latter view. Enterprise systems, argues Rettig, not only failed to deliver on their grand promise, but often simply aggravated the problems they were supposed to solve. “The triumphant vision many buy into is that enterprise software in large organizations is fully integrated and intelligently controls infinitely complex business processes while remaining flexible enough to adapt to changing business needs,” she writes. The reality is very different, says Rettig:

But these massive programs, with millions of lines of code, thousands of installation options and countless interrelated pieces, introduced new levels of complexity, often without eliminating the older systems (known as “legacy” systems) they were designed to replace. In addition, concurrent technological and business changes made closed ERP systems organized around products less than a perfect solution: Just as companies were undertaking multiyear ERP implementations, the Internet was evolving into a major new force, changing the way companies transacted business with their customers, suppliers and partners. At the same time, businesses were realizing that organizing their information around customers and services – and using newly available customer relationship management systems – was critical to their success.

The concept of a single monolithic system failed for many companies. Different divisions or facilities often made independent purchases, and other systems were inherited through mergers and acquisitions. Thus, many companies ended up having several instances of the same ERP systems or a variety of different ERP systems altogether, further complicating their IT landscape. In the end, ERP systems became just another subset of the legacy systems they were supposed to replace.

Given the high cost of the systems – around $15 million on average for a big company – it’s unsurprising, writes Rettig, that despite much study, researchers have yet to demonstrate that “the benefits of ERP implementations outweigh the costs and risks.” In fact, in a revealing twist, the mere ability to install an ERP system without suffering a major disaster or disruption has come to be viewed as a relative triumph: “It seems that ERPs, which had looked like the true path to revolutionary business process reengineering, introduced so many complex, difficult technical and business issues that just making it to the finish line with one’s shirt on was considered a win.”

Rettig’s conclusion is a dark one:

enterprise systems were supposed to streamline and simplify business processes. Instead, they have brought high risks, uncertainty and a deeply worrying level of complexity. Rather than agility they have produced rigidity and unexpected barriers to change, a veritable glut of information containing myriad hidden errors, and a cloud of questions regarding their overall benefits.

Rettig doesn’t see any quick fix on the horizon. Realizing the promise of a more modular and flexible service-oriented architecture (SOA), she argues, may take decades and will itself be fraught with peril. “The timeline itself for this kind of transformation may just be too long to be realistically sustainable and successful,” she writes. “And to the extent that these service-oriented architectures use subsets of code from within ERP and other enterprise systems, they do not escape the mire of complexity built over the past 15 years or so. Rather, they carry it along with them, incorporating code from existing applications into a fancy new remix. SOAs become additional layers of code superimposed on the existing layers.”

So what’s the solution? Rettig doesn’t offer one, beyond suggesting that top executives do more to educate themselves about the problem and work more closely with their CIOs. That may be good advice, but it hardly addresses the underlying technical challenge. Still, Rettig has provided a valuable service with her article. While some will argue that her indictment is at times overstated, she makes a compelling case that the traditional approach to corporate computing has become a dead end. We need to set a new course.

UPDATE: Harvard’s Andrew McAfee takes issue with Rettig’s article, arguing that managers have been rational in investing large amounts of cash in enterprise systems and pointing to a recent study which indicates that, for the customers of one ERP vendor at least, the successful installation of a system produces, on average, subsequent performance gains. McAfee’s critique, however, doesn’t address Rettig’s larger point, which concerns the effect of ERP’s complexity on companies’ choices going forward.

Edgeio vs. Freegeio

I sort of trashed Edgeio when it originally unveiled itself a year and a half ago. The company, founded by Keith Teare with some help from Mike Arrington and others, wanted to be a centralized clearinghouse for decentralized classified ads. Instead of posting your ad on Craigslist or listing your product on eBay, you’d put an ad on your own blog and, through the magic of RSS, it would automatically be aggregated with other people’s ads on the Edgeio site. What Edgeio was offering was an elegant solution to a problem that no one had.

Last week, Edgeio introduced a new service – a “distributed paid-content platform,” in the company’s non-memorable phrasing – that is altogether more interesting. In essence, Edgeio is providing a shopping-cart-in-a-widget that makes it easy to sell digital goods through a site. For instance, if I decided to sell the post you’re now reading for, say, $3.00, I could stick a little button right here saying, “To read this entire post, click here.” You’d click, a box would appear asking you to pay the three bucks, you’d pay the fee (right?), and then lickety-split the rest of the text would appear. You’d be a happy buyer, I’d be a happy seller, and Edgeio would also be happy because it would pocket a 20% cut of the sale price. If I wanted to sell an MP3 music file or podcast or a video stream or a pdf, I could do it in the same way.
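In case the flow is hard to picture, here is a minimal sketch of the pay-to-unlock logic, in Python. The names are mine, invented for illustration – this is not Edgeio’s actual widget API.

    # Record of who has paid for what: (buyer, post_id) pairs.
    purchases = set()

    def buy(buyer, post_id):
        """Called once the buyer has paid the asking price."""
        purchases.add((buyer, post_id))

    def read(buyer, post_id, full_text, teaser):
        """Show the full post only to buyers; everyone else sees the teaser."""
        return full_text if (buyer, post_id) in purchases else teaser

    print(read("alice", "post-42", "...the whole post...", "To read this entire post, click here."))
    buy("alice", "post-42")  # alice pays her three bucks
    print(read("alice", "post-42", "...the whole post...", "To read this entire post, click here."))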

OK, technically speaking, that’s nothing new. I could do something similar through PayPal. Except that Edgeio greatly simplifies the process – and, as someone who once tried, and failed miserably, to figure out how to use PayPal to offer an in-site purchase of text, I can tell you that simplification is a powerful business model.

Then again, who would pay $3 to read this? Answer: nobody (except maybe Keith Teare). But what if, instead of being a short post about Edgeio, this was a brilliant 5,000-word analysis of the future of the enterprise application market, and instead of asking three bucks for it, I gave it a price tag of $500? That, I think, is where the Edgeio service holds some promise. It’s not about mass-market micropayments; it’s about niche-market macropayments.

But where the Edgeio service gets really interesting, at least in theory, is that it builds in an affiliate program. What that means is that other people would also be able to sell this post (or a music file or a video stream or a pdf of that brilliant 5,000-word analysis) through their own sites, and they would earn a percentage of the sale price as set by me. To put it somewhat grandiosely, the Edgeio service automates the creation of a distribution network, at both the logistical and the contractual level.
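Here, again as an invented illustration rather than Edgeio’s published terms, is how a three-way settlement on an affiliate sale might work, with the affiliate’s percentage set by the seller as described above:

    def settle_affiliate_sale(price, affiliate_rate, platform_cut=0.20):
        """Split a sale among platform, affiliate, and seller."""
        platform = price * platform_cut        # Edgeio's 20% cut of the sale price
        affiliate = price * affiliate_rate     # percentage chosen by the seller
        seller = price - platform - affiliate  # the producer keeps the rest
        return {"seller": seller, "affiliate": affiliate, "platform": platform}

    # A $500 analysis sold through an affiliate's site at a 15% commission:
    print(settle_affiliate_sale(500.00, 0.15))
    # -> {'seller': 325.0, 'affiliate': 75.0, 'platform': 100.0}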

But there’s one very important thing that the service lacks: the ability for sellers to aggregate diverse bits of content from various producers into a bundle and to sell the bundle rather than the individual pieces. Say, for instance, I’m a college professor who has developed an interesting new course that other professors might want to give. I could use an Edgeio-like service to collect the course readings into a single bundle that I could sell with a teaching plan. Or say I’m a master of the mix tape. I could create a bundle of songs and sell them through my site (rather than posting my most-excellent playlist on iTunes and letting Apple make all the money by selling the actual tunes). Or say I’m a post-modern magazine tycoon. People would pay me to assemble a nifty bundle of articles drawn from various sites, thus saving themselves a lot of time reading a lot of crap. When you let sellers bundle, you open the way for a lot more creativity in merchandising – and you temper some of the problems with the micropayment model.
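Since no such capability exists, one has to imagine the data structure. Purely as a hypothetical sketch, a bundle might be a priced list of items from different producers, with the post-commission revenue shared pro rata:

    # A bundle: one price for items drawn from several producers.
    bundle = {
        "title": "Course reader",
        "price": 40.00,
        "items": [("prof_a", "reading-1", 10.00),  # (producer, item, solo price)
                  ("prof_b", "reading-2", 20.00),
                  ("prof_c", "reading-3", 10.00)],
    }

    def settle_bundle(bundle, platform_cut=0.20):
        """Divide the bundle price among producers by each item's solo price."""
        pool = bundle["price"] * (1 - platform_cut)
        solo_total = sum(solo for _, _, solo in bundle["items"])
        return {producer: pool * solo / solo_total
                for producer, _, solo in bundle["items"]}

    print(settle_bundle(bundle))  # {'prof_a': 8.0, 'prof_b': 16.0, 'prof_c': 8.0}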

I realize I’m letting my sentimental bias show: I still hope that there will be a way to actually sell stuff on the internet rather than having to give everything away for free, crassly plastered with ads. (Why? Because I think that the hegemony of “free” will in the long run end up narrowing our choices rather than expanding them.) Edgeio’s biggest competitor is Freegeio, and Freegeio will probably win. But, hey, it’s a nice try, and I hope Edgeio (a) adds a bundling capability to its service and (b) succeeds. And even if it doesn’t establish a context in which micropayments become attractive, the niche macropayments model may well work.

Now, aren’t you glad you didn’t have to shell out $3 to read this post?

Cheapskate.

Cheaper, better IT

From my article “Ten Tips for Reducing Burgeoning IT Costs” in the new issue of Director Magazine:

The good news is that in the wake of the Y2K scare and the bursting of the dotcom bubble, companies have grown more skeptical about IT and more conservative in their spending. Microsoft faces a much tougher sell [in pitching upgrades] this year than it did in 2001 when it rolled out Windows XP. Since then exciting new technologies have also emerged that have allowed businesses to use their existing IT equipment more effectively and avoid buying new gear. Suddenly, companies are finding they can cut their IT budgets and still have the computing capabilities they need. Smart IT management is all about getting more for less. Here are 10 ways your business can achieve that goal …

Read.

The automation of social life

William Davies has written a brief, important essay called The Cold, Cold Heart of Web 2.0 in The Register. He argues that it’s a mistake to assume that the technology-driven efficiencies we welcome in the commercial realm, as a means of reducing costs and, often, expanding choices, will also bring benefits when applied to the social or cultural realm. Society is not a market, and automation may harm it rather than enhance it.

“The first dotcom boom,” Davies writes, “was principally about putting the internet to work in increasing the efficiency of existing services.” It made activities like the purchase of books and the payment of taxes easier by automating some of their more time-consuming aspects. The main thrust of Web 1.0 was to streamline “one-to-many” services, which “feature an organisation that resembles a ‘producer’ offering something to individuals who resemble ‘consumers’, who usually have some choice about whether or not to accept it.”

Web 2.0, by contrast, “abandons this conventional one-to-many model of service provision, and sets about exploiting the many-to-many potential of the internet. Rather than using the web to connect producers to consumers, it is used to connect individuals to each other.” Computer networks have, of course, always supported many-to-many services, like bulletin boards and other social networks. What’s changed with Web 2.0, Davies writes:

is that these otherwise secluded and organic realms of social interaction are now the focus of obsessive technological innovation and commercial interest. The same technological zeal and business acumen that once was applied to improving the way we buy a book or pay our car tax is now being applied to the way we engage in social and cultural activities with others.

In short, efficiency gains are no longer being sought only in economic realms such as retail or public services, but are now being pursued in parts of our everyday lives where previously they hadn’t even been imagined. Web 2.0 promises to offer us ways of improving the processes by which we find new music, new friends, or new civic causes. The hassle of undesirable content or people is easier to cut out. We have become consumers of our own social and cultural lives.

The problem – and the danger – is that efficiency plays a very different role in the marketplace of products than it does in the realm of society and culture. “Undoubtedly there are instances where we do want our social lives to be more efficient,” writes Davies. “But we should worry about this psychology seeping too far into our lives.” We do not, and should not, judge the quality of our social and cultural life by its efficiency. As Davies concludes:

The pursuit of maximum convenience in the cultural sphere risks dissolving what we value in it in the first place. Outside of the economy – and very often within the economy too – we find that the constraints and accidents of everyday life are the basis for enjoyable and meaningful activities. They don’t necessarily connect us to the people we most want to speak to or the music we most want to listen to. Sometimes they even frustrate us. But this shouldn’t lead to business process re-engineering.

In a recent blog post, the usually perceptive Clay Shirky writes, of my own work, “I have never understood Nick Carr’s objections to the cultural effects of the internet … when he talks about the effects of the net on business, he sounds more optimistic, even factoring in the wrenching transition, so why aren’t the cultural effects similar cause for optimism, even accepting the wrenching transition in those domains as well?” The real question, to me, is this: Why in the world would anyone believe that the cultural effects of the internet would be beneficial simply because the internet’s effects on business are beneficial? And yet Shirky is far from alone in making this bizarre association – it runs like a vein of fool’s gold through the writing of the Net’s rose-tinted-glasses set. They want to believe that the processes of culture-making and society-building can be automated and reengineered as if they were the processes of widget-manufacturing. As Davies eloquently explains, they’re wrong.

(This is a theme, by the way, that runs, less succinctly, through my forthcoming book The Big Switch: Our New Digital Destiny.)

UPDATE: Ian Douglas and Joshua Porter offer thoughtful rejoinders. I agree with Porter that I was mistaken to call efficiency “an intrinsic good” in markets; I have edited my original post to temper that point. I disagree, however, with Porter’s contention that there’s no difference between “markets” and “social lives.”

Growing up virtual

From my column in today’s Guardian:

Compared to Club Penguin, Second Life, the much-hyped virtual world aimed at adults, is something of a ghost town. It’s managed to attract only about 95,000 paid subscribers so far, a fraction of Club Penguin’s 700,000. In fact, all of the most popular virtual worlds are geared to kids and teenagers. The venerable Habbo Hotel, originally launched in Finland in 2000, attracts 7 million visitors a month, Sweden’s Stardoll attracts 5 million, Webkinz and Neopets attract 4 million each, and Gaia Online reports nearly 3 million monthly visitors … Clearly, there are big commercial rewards to be had by enticing children to spend a lot of time exploring virtual worlds. What’s less clear, though, is the long-term effect on the kids themselves.

Read.