Invite bonanza: Pownce, Freebase, iMedix

UPDATE: Pownce invites are gone.

Somehow or other, I have managed to assemble a small pile of invitations for joining the communities of Pownce, Freebase, and iMedix. Because I consider everyone who visits Rough Type to be my friend, in the Web 2.0 sense of that term, I’m going to give them away, first come, first served. To request one, send an email to:

iloveroughtype @ mac . com

and put the name of the desired site in the subject field (one site per entrant). If you win, you’ll receive your invitation directly from the site. When the invites have all been given away, the above email address will be terminated and I will put a notice on this page.

Good luck, my friends.

Skype and the hedge fund problem

Hedge funds work on a simple wisdom-of-the-crowd principle: Because they involve an extremely large number of transactions, which smooths out the vagaries of individual trades, the movements of financial markets follow predictable patterns, which can be discerned from a study of their past behavior. Deviations from the patterns tend to be short-lived, and by making huge bets that prices will quickly revert to the norm, you can make a whole lot of money. As we’ve seen recently, though, things aren’t quite as simple as the hedge fund operators assume. Sometimes, very weird things happen and the deviations become either larger or longer-lived than expected, at which point the big bets can unravel in very unpleasant ways.
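A toy calculation makes the asymmetry concrete. The sketch below uses invented numbers, not any real fund’s strategy: a leveraged bet that a price deviation will shrink earns a small, steady profit when it does, but a rare widening of the same deviation produces a loss many times larger.

```python
def reversion_pnl(deviation, next_deviation, leverage=20.0):
    """P&L of a leveraged bet that a deviation from the norm will shrink.
    Profit is proportional to how much the gap closes; a widening gap
    produces a loss, magnified by the same leverage."""
    return leverage * (abs(deviation) - abs(next_deviation))

# Normal case: a 2% deviation closes to 0.5% -- a modest, reliable gain.
normal = reversion_pnl(0.02, 0.005)   # ~ +0.30

# "Weird" case: the same 2% deviation widens to 10% instead of closing.
shock = reversion_pnl(0.02, 0.10)     # ~ -1.60
```

One bad episode, in this caricature, erases more than five normal wins at once, which is roughly the shape of the recent hedge fund unravelings.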

Peer-to-peer networks, which also involve lots and lots of different actors doing lots and lots of different things for lots and lots of different reasons, work in a similar way. A company like Skype, whose telephone network is designed to run on many thousands of computers spread across a big P2P network, has built its system on the assumption that usage patterns are predictable – even if the actions of any individual user are not. Last week, Skype ran head-on into the hedge fund problem. The network’s behavior deviated from the norm in a way that was greater than the Skype engineers had planned for, and the system crashed. The catalyst was the distribution of a routine patch from Microsoft, which led to a cascade of unanticipated effects, as Skype’s Villu Arak explains:

The Microsoft Update patches were merely a catalyst — a trigger — for a series of events that led to the disruption of Skype, not the root cause of it … The high number of post-update reboots affected Skype’s network resources. This caused a flood of log-in requests, which, combined with the lack of peer-to-peer network resources at the time, prompted a chain reaction that had a critical impact. The self-healing mechanisms of the P2P network upon which Skype’s software runs have worked well in the past … Unfortunately, this time, for the first time, Skype was unable to rise to the challenge and the reasons for this were exceptional. In this instance, the day’s Skype traffic patterns, combined with the large number of reboots, revealed a previously unseen fault in the P2P network resource allocation algorithm Skype used. Consequently, the P2P network’s self-healing function didn’t work quickly enough. Skype’s peer-to-peer core was not properly tuned to cope with the load and core size changes that occurred on August 16.
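The dynamic Arak describes can be caricatured as a simple queue. If far more clients reboot and request log-ins at once than the network can admit per unit of time, the backlog persists long after the trigger has passed. A minimal sketch, with invented numbers rather than Skype’s actual figures:

```python
def simulate_logins(reboots, capacity, ticks=10):
    """Toy queue model of a login storm: all `reboots` clients request
    a log-in at once, the network can admit `capacity` per tick, and
    anyone not served retries the next tick. Returns the backlog left
    after `ticks` ticks."""
    pending = reboots
    for _ in range(ticks):
        pending -= min(pending, capacity)
    return pending

# An ordinary day's reboots clear quickly...
normal_backlog = simulate_logins(reboots=1_000, capacity=500)    # 0

# ...but a patch-day reboot storm swamps the same capacity,
# leaving a huge queue still pounding the network.
storm_backlog = simulate_logins(reboots=100_000, capacity=500)   # 95,000
```

The real failure was subtler – the flood exposed a fault in the resource allocation algorithm, so capacity itself degraded – but the basic arithmetic of demand outrunning supply is the same.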

As our economy becomes ever more tightly and intricately networked, its continued operation will hinge on the assumptions that mathematicians and software engineers embed in the code that underpins it. Usually, the assumptions will hold. But usually isn’t always. Weird things happen, even in the largest of crowds.

Long player: bonus track

A while back, in the post Long player, I disputed David Weinberger’s contention, in his book Everything Is Miscellaneous, that the vinyl record album was a purely economic contrivance and that we purchased and listened to albums not “for artistic reasons,” as we had assumed, but only “because the economics of the physical world required it: Bundling songs into long-playing albums lowered the production, marketing, and distribution costs because there were fewer records to make, ship, shelve, categorize, alphabetize, and inventory.” The form of the album was actually created, I argued, to expand both the artistic canvas and the supply of recorded music, and, indeed, its arrival unleashed a remarkable flood of creativity in popular music while also vastly expanding the supply of recordings, to everyone’s benefit.

In recently rereading Marshall McLuhan’s classic Understanding Media – insanely brilliant, with an equal emphasis on both words – I came across a brief passage in which McLuhan describes how the LP album spurred a burst of creativity in jazz as well as pop:

… the l.p. record suddenly made the phonograph a means of access to all the music and speech of the world … With regard to jazz, l.p. brought many changes, such as the cult of “real cool drool,” because the greatly increased length of a single side of a disk meant that the jazz band could really have a long and casual chat among its instruments. The repertory of the 1920s was revived and given new depth and complexity by this new means.

McLuhan’s book was published in 1964, a couple of years before rock musicians would realize that the LP form allowed them a way to extend their creativity beyond the individual track. Well before what we now recognize as the golden age of the album, the LP was viewed as a liberating technology, for musician and listener alike, not as a means of constraining choice and oppressing music fans.

The end of ERP?

As the founder and leader of PeopleSoft, Dave Duffield played a seminal role in establishing enterprise resource planning, or ERP, systems as the IT engines of big business. But then, in a hostile takeover, the enterprise software giant Oracle yanked PeopleSoft out of Duffield’s hands. Now, Duffield’s back in town, and he’s gunning for ERP.

It’s the Shootout at Enterprise Gulch.

Today, Duffield’s new company, Workday, is announcing an expansion of its suite of software-as-a-service business applications to include not only human resource management – its original offering – but also a set of financial management services, including accounts payable and receivable, general ledger, and reporting and analysis. The integrated suite, which is being offered in beta form and will be further fleshed out in coming months, provides, Duffield’s deputy Mark Nittler told me, “the first alternative to ERP.”

It’s an alternative to ERP, rather than a Web-delivered version of ERP, argues Nittler, because the system’s software guts are entirely different. Rather than being tightly tied to a complex, disk-based relational database with thousands of different data tables, the Workday system uses a much simpler in-memory database, running entirely in RAM, and relies on metadata, or tags, to organize and integrate the data. Running the database in memory means the system can be much faster (crucial for Web-delivered software), and using metadata rather than static tables, says Nittler, gives users greater flexibility in tailoring the system to their particular needs. It solves ERP’s complexity problem – or at least it promises to. (For more on the nuts and bolts, see David Dobrin’s whitepaper and Dan Farber’s writeup.)
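As a rough illustration of the idea – the names and structure here are invented for the sketch, not Workday’s actual design – an in-memory store that organizes records by metadata tags rather than fixed relational tables might look like this:

```python
class MetadataStore:
    """Records live in RAM and carry arbitrary metadata tags instead of
    being locked into predefined relational tables."""

    def __init__(self):
        self.records = []

    def add(self, data, **tags):
        self.records.append({"data": data, "tags": tags})

    def find(self, **tags):
        """Return the data of every record whose tags match all the
        given key/value pairs."""
        return [r["data"] for r in self.records
                if all(r["tags"].get(k) == v for k, v in tags.items())]

store = MetadataStore()
store.add({"amount": 1200}, kind="invoice", dept="sales")
store.add({"amount": 300},  kind="invoice", dept="hr")
store.add({"name": "Ann"},  kind="employee", dept="hr")

invoices = store.find(kind="invoice")   # both invoices, across departments
hr_items = store.find(dept="hr")        # the HR invoice and the HR employee
```

The flexibility claim falls out of the structure: slicing the data a new way means querying by a different tag, not redesigning a schema of thousands of tables.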

So what are the odds that Duffield’s Workday will come out on top once the dust has settled in Enterprise Gulch? The odds are long. But Workday has three things going for it. First, it has the widely admired Duffield, who gives the upstart immediate credibility with customers, investors, and programmers. Second, it has a technological head start. There are reasons to believe that the secret new system, codenamed A1S, being developed by SAP, the biggest ERP provider, will resemble what Workday is doing, with an in-memory database and much metadata, but SAP is moving slowly, weighed down with the baggage of the past. Third, Workday is adopting a strategy of patience and steady gains. It’s targeting mid-sized companies that have not yet implemented full ERP systems – a rich market that’s also being targeted by SAP, Oracle, and Microsoft, among other mainstream software houses. The ERP virgins, who well know the costs, complexities, and risks of installing an ERP system on their own hardware, have good reason to give careful consideration to a software-as-a-service offering like Workday’s, which runs in a browser and requires little in the way of upfront capital investments. The middle market offers Workday a means of establishing a toehold before it moves upward to the big-company market, where it will actually have to displace installed systems – a tall order, indeed.

Salesforce.com’s marketing slogan has long been “The End of Software.” Workday’s pitch sounds like “The End of ERP.” Whether or not Workday itself succeeds in its battle against the behemoths, we already see in its innovative system the outlines of the post-ERP era of enterprise computing.

Dell’s tainted soul

Last week, Dell let it be known that it would bestow on its former CEO, Kevin Rollins, a payment of nearly $50 million for unexercised stock options. That was on top of a golden-parachute severance payment of $5 million that Rollins received when he was ousted six months earlier. This week Dell revealed that, under Rollins’s watch, it engaged in a high-level conspiracy to defraud investors by faking its earnings reports.

In April 2006, Rollins gave a speech at the University of Texas at Austin, sponsored by a campus religious group, in which he touted his and his company’s high ethical standards. The college newspaper reported on the speech:

Dell Computers uses strict ethical standards and whistle-blowing procedures to ensure that its employees maintain the highest standard of integrity, said Dell’s president and CEO in a speech on campus Tuesday night.

Kevin Rollins, who took over as president and CEO of Dell in 2004, said the company has a set of guidelines called “The Soul of Dell” that tells employees how to act ethically. The rules seem very simplistic and easy, but they take on new meaning in light of the exposure of misconduct at Enron Corporation, he said.

Dell operates on a “one strike, and you’re out” policy, Rollins said. If an employee commits one breach of ethics, he or she is fired. They are held to a higher standard than simply following the law, he said.

“If you want to operate another way, leave, because we do not want to have that at our company. We do not want to be tainted,” Rollins said.

Talk is cheap. If Kevin Rollins wants to maintain his own integrity as well as that of his former company, he should return the $50 million to the Dell shareholders who were cheated while he was in charge.

VMware, Xen and the hardware business

Yesterday’s feeding frenzy for VMware’s freshly spawned stock may or may not prove to be rational, but, together with today’s news that Citrix is paying a hefty price to buy XenSource, it clearly shows that investors have taken notice of a sea change in the IT business: a great deal of money that would traditionally have gone into hardware is now going into software instead. VMware’s and Xen’s virtualization software allows companies to turn a single computer into two or more “virtual” computers, each of which can be doing a different job. Essentially, it allows computer owners to tap into the spare processing capacity of their machines – spare capacity that has traditionally been wasted and that appears to have increased steadily as Moore’s Law has pushed up the power of microprocessors.

Some argue that virtualization will ultimately spur greater purchases of servers and other hardware. In a recent blog post, for example, Sun Microsystems CEO Jonathan Schwartz wrote, “I’d like to go on record saying virtualization is good for the technology industry – which seems to be counterintuitive. The general fear is that technologies like Solaris 10 or VMware that help people squeeze more work from the systems they already own is somehow bad for Sun. In my view, quite the opposite is true. As I said, when we double the speed of our computers, people don’t buy half as many – they tend to buy twice as many.”

Schwartz’s analysis has a basic flaw – he conflates two things, processor speed and processor capacity utilization, that, while related, are different – but that doesn’t necessarily mean his conclusion is wrong. In fact, if you look only at the near term, he may well be right. Virtualization will likely spur purchases of servers – for the simple reason that when companies consolidate their machines they often upgrade their hardware at the same time (to get the full benefits of virtualization).

But what about the longer term? Here, it seems to me, the picture is very different. It’s becoming clear that, for large companies in particular, the consolidation of hardware can take place on a truly vast scale. Siemens Medical Solutions, for instance, used virtualization to increase the capacity utilization of its servers from 4% (yes, you read that right) to about 65%, enabling it to reduce the number of servers it runs by about 90%. Hewlett-Packard, itself a major server supplier, is engaged in a massive consolidation effort of its own, which it expects to reduce the number of servers it runs by nearly a third even as it increases its available computing power by 80%. These are not unusual results, and consolidation ratios will likely continue to expand as virtualization, server, and processor technologies advance. When you consider that we are only at the beginning of a major wave of consolidation that will enable big companies to operate with far fewer computers, it becomes difficult to imagine that we’ll end up with more servers in corporate data centers.
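The Siemens figures hold up to a back-of-the-envelope check: serving the same workload at 65% utilization instead of 4% requires only about 6% as many machines, or roughly a 94% reduction, in line with the approximately 90% reported.

```python
# Consolidation arithmetic: total work done is (servers x utilization),
# so holding the workload constant, the new fleet is a fraction
# old_util / new_util of the old one.
old_util, new_util = 0.04, 0.65

fleet_fraction = old_util / new_util   # ~ 0.06 of the old server count
reduction = 1 - fleet_fraction         # ~ 0.94, i.e. ~94% fewer servers
```

The same arithmetic explains why consolidation ratios should keep expanding: every improvement in achievable utilization shrinks the numerator of the fleet a company needs.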

That doesn’t mean that there aren’t new applications of computing that will require substantial increases in processing power. I tend to agree with another Sun executive, CTO Greg Papadopoulos, that we’ll see an increasing demand for high-performance computing that will involve considerable investment in new hardware. But Papadopoulos also points out that the processing demands of mainstream business applications – the meat and potatoes of the enterprise IT market – are not keeping up with the expansion of computing supply delivered by Moore’s Law. In other words, the hardware requirements of most business applications are going down, not up – and consolidation opportunities will only grow.

Moreover, much of the investment in high-performance computing is going into building central, utility data centers – for software-as-a-service applications, computing grids, and storage depots – that themselves displace the need for businesses to buy additional hardware. When a company signs up for, say, Salesforce.com’s CRM service instead of a CRM application from Oracle or Microsoft, it avoids having to buy its own servers. Similarly, when a small business or a school decides to use Google’s Gmail platform to run its email, it can avoid having to buy and run its own email server.

In other words, the consolidation of servers and other gear is not only occurring at the level of the individual company, through the adoption of virtualization and other technologies for improving capacity utilization. Utility computing offers the opportunity for consolidation to occur as well across entire industries and, indeed, the entire economy. While the total demand for computer processing cycles seems certain to continue its inexorable rise, that no longer means that the overall demand for computers needs to rise with it. The hardware business is changing, and investors are wise to take notice.

ERP’s troubled legacy

Over the last two decades, companies have plowed many billions of dollars into enterprise resource planning (ERP) systems and the hardware required to run them, and the largest purveyors of the complex software packages, notably SAP and Oracle, continue to earn billions every year selling and maintaining the systems. But what, in the long run, will be the legacy of enterprise systems? Will ERP be viewed as it has been promoted by its marketers: as a milestone in business automation that allowed companies to integrate their previously fragmented information systems and simplify their data flows? Or will it be viewed as a stopgap that largely backfired by tangling companies in even more systems complexity and even higher IT costs?

In The Trouble with Enterprise Software, an article in the new issue of the MIT Sloan Management Review, Cynthia Rettig deftly lays out the case for the latter view. Enterprise systems, argues Rettig, not only failed to deliver on their grand promise, but often simply aggravated the problems they were supposed to solve. “The triumphant vision many buy into is that enterprise software in large organizations is fully integrated and intelligently controls infinitely complex business processes while remaining flexible enough to adapt to changing business needs,” she writes. The reality is very different, says Rettig:

But these massive programs, with millions of lines of code, thousands of installation options and countless interrelated pieces, introduced new levels of complexity, often without eliminating the older systems (known as “legacy” systems) they were designed to replace. In addition, concurrent technological and business changes made closed ERP systems organized around products less than a perfect solution: Just as companies were undertaking multiyear ERP implementations, the Internet was evolving into a major new force, changing the way companies transacted business with their customers, suppliers and partners. At the same time, businesses were realizing that organizing their information around customers and services – and using newly available customer relationship management systems – was critical to their success.

The concept of a single monolithic system failed for many companies. Different divisions or facilities often made independent purchases, and other systems were inherited through mergers and acquisitions. Thus, many companies ended up having several instances of the same ERP systems or a variety of different ERP systems altogether, further complicating their IT landscape. In the end, ERP systems became just another subset of the legacy systems they were supposed to replace.

Given the high cost of the systems – around $15 million on average for a big company – it’s unsurprising, writes Rettig, that despite much study, researchers have yet to demonstrate that “the benefits of ERP implementations outweigh the costs and risks.” In fact, in a revealing twist, the mere ability to install an ERP system without suffering a major disaster or disruption has come to be viewed as a relative triumph: “It seems that ERPs, which had looked like the true path to revolutionary business process reengineering, introduced so many complex, difficult technical and business issues that just making it to the finish line with one’s shirt on was considered a win.”

Rettig’s conclusion is a dark one:

enterprise systems were supposed to streamline and simplify business processes. Instead, they have brought high risks, uncertainty and a deeply worrying level of complexity. Rather than agility they have produced rigidity and unexpected barriers to change, a veritable glut of information containing myriad hidden errors, and a cloud of questions regarding their overall benefits.

Rettig doesn’t see any quick fix on the horizon. Realizing the promise of a more modular and flexible service-oriented architecture (SOA), she argues, may take decades and will itself be fraught with peril. “The timeline itself for this kind of transformation may just be too long to be realistically sustainable and successful,” she writes. “And to the extent that these service-oriented architectures use subsets of code from within ERP and other enterprise systems, they do not escape the mire of complexity built over the past 15 years or so. Rather, they carry it along with them, incorporating code from existing applications into a fancy new remix. SOAs become additional layers of code superimposed on the existing layers.”

So what’s the solution? Rettig doesn’t offer one, beyond suggesting that top executives do more to educate themselves about the problem and work more closely with their CIOs. That may be good advice, but it hardly addresses the underlying technical challenge. Still, Rettig has provided a valuable service with her article. While some will argue that her indictment is at times overstated, she makes a compelling case that the traditional approach to corporate computing has become a dead end. We need to set a new course.

UPDATE: Harvard’s Andrew McAfee takes issue with Rettig’s article, arguing that managers have been rational in investing large amounts of cash in enterprise systems and pointing to a recent study indicating that, for the customers of one ERP vendor at least, the successful installation of a system produces, on average, subsequent performance gains. McAfee’s critique, however, doesn’t address Rettig’s larger point, which concerns the effect of ERP’s complexity on companies’ choices going forward.