Monthly Archives: June 2005

To hell with culture

Larry Ellison just put another nail into the coffin of software sentimentality. A mere six months after Oracle’s rancor-filled takeover of PeopleSoft, Ellison’s company yesterday delivered strikingly strong financial results, not just in its database business but in applications as well. The numbers undercut the popular notion that mergers of software firms are horribly difficult, if not inherently doomed.

Because the value of software makers lies in the creativity of their “human assets,” the old thinking went, you couldn’t apply tough management discipline in quickly consolidating two organizations and ripping out redundancy. Cultural friction would get in the way; sensitive knowledge-workers would walk. Here’s how IBM’s Joe Marasco put it last year: “The largest single reason for failure when two software companies combine is cultural incompatibility. Even if the two cultures are similar, merging them can be difficult for a vast variety of technical reasons. Plus, if the two companies are located some distance from one another, there is insularity because of the separation. Whatever the root cause, in the face of fundamental incompatibility, most software mergers fail, plain and simple.”

Ellison’s approach flew in the face of the conventional wisdom. As he tells BusinessWeek, he applied GE’s hard-nosed, to-hell-with-culture acquisition philosophy to combining the two big software houses: “We had a clear plan. A lot of things were done in 30 days, including integrating the two salesforces. The secret to these mergers is to make the hard decisions and move quickly. The problem with a lot of mergers in the tech industry is they’re not real mergers. People don’t eliminate duplication of effort. We wanted to get the economies immediately.” It’s the rip-mix-burn method, and it seems to have worked.

As we move to a more industrial approach to making business software (look at the Bangalore factories), we’re also moving to a more industrial approach to software company management. The software business is maturing, and sentimentalism about creativity is being squeezed out. Here’s Ellison again: “I think of this as the GE operational-excellence phase. Alfred Sloan was the consolidator in the auto industry. Ford had been the early winner, but General Motors got bigger. History repeats itself. It happened in railroads and cars. Now it’s happening in software. And there, we’re the consolidator. The magic in the software industry is called scale.”

Don’t get me wrong: There will always be an important place in software for entrepreneurship and innovation. They’re just not the forces driving the business anymore.

The economics of mischief

I’ve recently had the pleasure of adding a new step to my morning greet-the-world ritual: wake up, have coffee, take shower, brush teeth, purge trackback spam from blog. Trackbacks, for those on the periphery of the blogosphere, are links that are automatically added to a blog posting when another blogger makes a reference to it. Spammers are now using trackbacks to disseminate links to their own sites, which typically sell porn, Viagra or other tools for enhancing (or fabricating) intimacy.

Now, the hit rate for trackback spam must be incredibly low – much lower than, say, email spam or that other bane of the blogger’s existence, comment spam. I mean, how many of you dig into a posting’s trackbacks, and of those who do, how many are dumb (or desperate) enough to be taken in by a blindingly obvious spam link? So the very existence of trackback spam underscores the incredibly low marginal costs of doing stuff online with software. If you can automate the distribution of spam, then the marginal cost of sending out an additional marketing message is basically zero. Economically, it’s worth it even if your hit rate is, oh, one-in-a-zillion. The cost of excising trackback spam, on the other hand, is very real. Either you turn off trackbacks, which cuts you off from their promotional value in attracting readers, or you delete the spam messages, one by one. (The spammers are always one step ahead of automated filters.) Basically, what this does is institutionalize mischief.
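To make the near-zero marginal cost concrete, here is a minimal sketch in Python of what a trackback ping amounts to: a single HTTP POST of a few form fields to a post’s trackback address. The target addresses, link and blog name below are hypothetical placeholders; this is only an illustration of the mechanism, not anyone’s actual spam tool.

# Toy illustration: a trackback "ping" is just an HTTP POST of four
# form-encoded fields (title, excerpt, url, blog_name) to a post's
# trackback URL. Automating it costs the sender essentially nothing.
import urllib.parse
import urllib.request

def send_trackback_ping(trackback_url, title, excerpt, link, blog_name):
    """POST one trackback ping and return the server's raw response."""
    payload = urllib.parse.urlencode({
        "title": title,
        "excerpt": excerpt,
        "url": link,          # the address the pinger wants published
        "blog_name": blog_name,
    }).encode("utf-8")
    request = urllib.request.Request(
        trackback_url,
        data=payload,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    with urllib.request.urlopen(request) as response:
        return response.read()

# A spam "campaign" is just a loop over harvested trackback URLs; each
# additional message is effectively free, and failures cost nothing.
targets = ["http://example.com/blog/some-post/trackback"]  # hypothetical
for target in targets:
    try:
        send_trackback_ping(target, "Great post!", "You might also like...",
                            "http://example.com/spam-site", "Totally Legit Blog")
    except Exception:
        continue

Run against a long list of harvested addresses, that loop is the whole “business”; the only real cost lands on the blogger who has to clean up after it.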

But the economically motivated mischief is only one of the Internet’s plagues. Another is mischief without any economic motivation – what might be called online juvenile delinquency, the virtual equivalent of draping a neighbor’s tree with toilet paper. The highest-profile recent example is the posting of dirty pictures on the Los Angeles Times’ experimental editorial-page wiki. As soon as a thread on the experiment went up on Slashdot, the mischief-makers swarmed, and the “wikitorial” was doomed. Now, admittedly, there was a certain pleasure in watching this. Wikis (God, how I hate the precious neologisms of the Webworld!) are an annoyingly overhyped and narcissistic phenomenon, and to have a major newspaper indulge in them seemed both pretentious and patronizing. But the mindless destructiveness is also tiresome. Toilet-papering a tree once is funny; making a habit of it is pathetic. Again, though, the Internet makes the costs of mischief-making so low, even if there’s no profit motive, that its proliferation becomes inevitable.

Do I have a solution? Nope. Computer networks, it’s safe to say, will have no measurable impact on human nature. Hand out free cans of spray paint beside a wall that can be seen by everyone in the world, and you’re going to get a hell of a lot of graffiti.

Welcome to the morass

“We hold that one who distributes a device with the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement, is liable for the resulting acts of infringement by third parties.” Thus writes Justice David Souter on behalf of a unanimous Supreme Court in the Grokster case. The ruling is being called “a sweeping victory for music recording companies and movie studios” that sets “the stage for a major legal assault on rampant file-sharing of copyrighted works by attacking the software designers.”

The ruling does seem frighteningly open-ended, with possible implications reaching beyond peer-to-peer software. What constitutes an “affirmative step to foster infringement”? What about, say, Apple’s famous “Rip. Mix. Burn.” ads promoting its computers’ iTunes software and CD burners? What about Google’s practice of scanning books into its database? Faced with the possibility of deep-pocketed media companies launching lawsuits hinging on interpretations of a company’s intentions, many entrepreneurs will no doubt be deterred from pursuing a whole lot of innovations.

Here’s a pdf of the court’s full ruling.

Other takes on the ruling: New York Times, Dan Gillmor, Doc Searls, Cory Doctorow, Tom’s Hardware Guide, Fred von Lohmann, RIAA.

Continuous partial nonsense

“The world is too much with us,” wrote William Wordsworth in 1807. Last week, at Kevin Werbach’s Supernova conference, ex-Microsoft exec Linda Stone voiced a similar thought a little more prosaically. Bombarded by digital stimuli, we exist in a condition of “continuous partial attention,” she said, according to a transcript posted by Nat Torkington. “With continuous partial attention, we keep the top-level item in focus and scan the periphery in case something more important emerges. Continuous partial attention is motivated by a desire not to miss opportunities. We want to ensure our place as a live node on the network; we feel alive when we’re connected.” Now there’s already a lot of hooey here, but if Stone’s saying that we live in a culture of distractedness, where input-processing takes precedence over contemplation, she’s making a valid point.

But her argument gets awfully thin when she tries to broaden it. She links the continuous-partial-attention idea to what she sees as “20-year cycles” in which people act either collectively or individualistically. From 1945 to 1965, we were in a collective mode, putting our faith in big institutions like the government. From 1965 to 1985, we shifted to the individualistic mode, striving for self-expression. From 1985 to 2005, we shifted back to the collective, desiring to be networked. This is much too tidy. These alleged shifts in consciousness were nowhere near as abrupt or universal as Stone implies. Moreover, they are hardly evidence of natural 20-year cycles. Instead, they were precipitated by external events, particularly the end of World War II, which gave people trust in institutions and set off a long period of economic growth, and the start of the Vietnam War, which would undermine trust in institutions and coincide with the start of an economic slowdown. And were, say, the late 50s really a period of collectivism while the late 60s were marked by individualism? A case could be made that it was exactly the opposite: the 50s were a time of selfish concentration on one’s personal situation, while the 60s saw a flowering of collective thinking and activism.

As for the 1985-2005 period, that also marked a relatively peaceful time of fairly strong economic growth. People felt generally complacent about the future. But to call it an era of collectivism again seems a stretch. Did we really start “reaching out for a network” around 1985 because we had grown “narcissistic and lonely,” as Stone argues? Does that mean we’re less narcissistic and lonely now than we were 20 years ago? Why then has this era made us feel “promiscuous” and “empty,” as Stone puts it? That seems like the outcome of an individualistic rather than a collective period.

Continuing with her 20-year-cycle theme, Stone argues that we’re now about to move back to a greater focus on individual fulfillment, in which people will seek to give their full attention to a small number of “meaningful connections” instead of giving cursory attention to myriad Blackberry hookups. “The next aphrodisiac is committed full-attention focus. In this new era, experiencing this engaged attention is to feel alive.” The popularity of the iPod, in this view, is “as much about personal space as personalized playlists.” This strikes me as pure hogwash. The iPod as a symbol of deepening attentiveness? Come on. Forget the 20-year cycles. What we’ve seen is a steady, continuing rise in the general level of distractedness as computing and communication technologies deliver an ever greater and faster-flowing stream of stimuli. And there’s no end in sight.

What Stone is really expressing – and why it seems to have struck such a chord among the technorati – is middle-age fatigue and angst. The first generation of the PC/Internet avatars is getting older and becoming disillusioned with what they’ve wrought; the Silicon Age, it turns out, is no Golden Age. They’re starting to yearn for something more. What’s sad is that Stone and her compatriots still cling to the belief that technology will fill the personal void. “Trusted filters, trusted protectors, trusted concierges, human or technical, removing distractions and managing boundaries, filtering signal from noise, enabling meaningful connections, that make us feel secure, are the opportunity for the next generation,” she says.

I’ll let Wordsworth answer:

Great God! I’d rather be
A Pagan suckled in a creed outworn;
So might I, standing on this pleasant lea,
Have glimpses that would make me less forlorn;
Have sight of Proteus rising from the sea;
Or hear old Triton blow his wreathed horn.

The people problem

Steve Andriole, a business professor at Villanova, argues in a Datamation column that labor shortages will increasingly push companies to outsource IT activities and, in time, shift to a utility model. Despite recent reports that “outsourcing does not save as much money as many people assumed,” he writes, “the number of management information systems (MIS), information systems (IS), computer science (CS) and computer engineering (CE) majors has fallen so dramatically over the past few years that we’re likely to lose an entire generation of replacement technologists if present trends continue – and they show every sign of doing so. So as the previous generation continues to gray, there will be precious few new ones to keep the skills pipeline full. The obvious outcome is increased demand for the skills – wherever they happen to be.”

Andriole also looks at my argument that we’re approaching the end of corporate computing, as companies will increasingly shift from owning their own IT assets to renting most of the IT resources they need. “Long-term,” he writes, “I think [Carr] is absolutely right. Initially, companies will purchase transaction processing services from centralized data centers managed by large technology providers, but over time companies will rent applications developed the old-fashioned way by the same old mega software vendors … Eventually, as SOA proliferates, new software delivery and support models will develop from the old vendors as well as a host of new ones … The appeal of ‘paying by the drink’ is just too great to resist – especially since the alternative will still (and forever) require the care and feeding of increasingly difficult-to-find technology professionals.”

Andriole’s observations are in tune with what I’m hearing from some of the early adopters of the utility model. The CIO of one mid-sized company that recently closed down its data center and shifted its operations to an applications hosting company told me that while the move saved the firm about 20% of its IT costs, the real motivation lay on the people side. Finding, recruiting, motivating and holding onto skilled IT staffers had just become too much of a hassle. Another CIO, of a European financial services firm, said that he believes the best IT talent will inevitably move to the vendor side, where they have good prospects for advancing their careers. On the user side, he said, IT people no longer have many options to move up.

Utility computing and the digital divide

What’s the best way to give poor people, particularly those in the Third World, access to the power of modern computing and communications? Some argue for developing and distributing dirt-cheap personal computers. But there may be a better way: giving people a personal “virtual desktop” that they can tap into through shared PCs.

It’s a similar model to the one that’s allowed telephone service to reach even the poorest and most remote populations. You can’t sell everyone a phone – most people in these places can’t afford one. Rather, you bring one or two mobile phones into a village and rent them out call by call. Similarly, with existing utility-computing technologies, you can allow individuals to maintain their files and applications in a distant data center and tap into them through a shared PC or thin-client machine. Individuals can rent time on the shared PC for a little bit of money – and their data and apps will always be there, just as they left them.
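Here is a toy sketch of the idea in Python: the user’s desktop lives in a central store, and any shared terminal that logs the user in gets the same files back. All of the names are invented for illustration; this is not a description of any real service’s design.

# Minimal illustration of the shared-PC / virtual-desktop model: per-user
# state is kept centrally, so the terminal you sit at doesn't matter.
from dataclasses import dataclass, field

@dataclass
class VirtualDesktop:
    """Server-side state for one user: documents and preferences."""
    files: dict = field(default_factory=dict)
    settings: dict = field(default_factory=dict)

class DesktopUtility:
    """Stand-in for the distant data center holding every user's desktop."""
    def __init__(self):
        self._desktops = {}  # username -> VirtualDesktop

    def login(self, username):
        # Whichever terminal calls this gets the same desktop for this user.
        return self._desktops.setdefault(username, VirtualDesktop())

utility = DesktopUtility()

# Session at one shared PC (say, a library terminal)...
desktop = utility.login("maria")
desktop.files["letter.txt"] = "Draft started at the library..."

# ...and later, from a different machine: the document is still there.
print(utility.login("maria").files["letter.txt"])

A real service layers authentication, storage quotas and hosted applications on top of this, but the basic economics are the village phone’s: one machine, many renters.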

An innovative little company named SimDesk has been pursuing this model, on a limited scale, for a while now. It provides users with storage space, computing power and a set of free applications that can be accessed over the Internet. One U.S. city, Houston, and one state, Indiana, already offer the service to their citizens. (Chicago is in the process of rolling it out.) If you live in these places, you can do sophisticated computing without having to spend hundreds of dollars to buy a PC, a bunch of programs and Internet access. You just go into a public library, sit at a terminal and log into what is, in effect, your own computer.

The utility model brought cheap electricity to the masses. Maybe it can do the same for computing.

Other takes on utility computing

News.com has a new article by Martin LaMonica that surveys some reactions, mainly negative, to my recent article on utility computing, “The End of Corporate Computing.” CIOs and tech execs point to various challenges that utility suppliers must (and I think will) address, including security and customization at the user end. Dan Farber also chimes in, saying that “those who fail to gear up culturally and technically for utility computing in the next few years will end up at a competitive disadvantage.”

LaMonica asked me for my thoughts on his piece and I emailed him the following response:

“What we don’t know is the ultimate shape of the IT utility model or the course of its development. That’s what makes it so interesting – and so dangerous to current suppliers. What we do know is that the current model of private IT supply, where every company has to build and maintain its own IT power plant, is profoundly inefficient, requiring massively redundant investments in hardware, software and labor. Centralizing IT supply provides much more attractive economics, and as the necessary technologies for utility computing continue their rapid advance, the utility model will also advance. Smaller companies that lack economies of scale in their internal IT operations are currently the early adopters of the utility model, as they were for electric utilities. My guess is that larger companies will as a first step set up their own internal IT utilities, consolidating their fragmented IT assets, and will begin shifting to the external supply model only when outside utilities have proven themselves and gained superior scale economies. Again, this follows the electricity pattern.

“There are certainly tough challenges ahead for utility suppliers. Probably the biggest is establishing ironclad security for each individual client’s data as hardware and software assets become shared. The security issue will require technological breakthroughs, and I have faith that the IT industry will achieve them, probably pretty quickly. The technical challenges are tough, but the industry has overcome bigger ones in the past.

“In the end, the pressure on companies to embrace the most economical supply model for any important resource becomes too great to resist.”
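To put rough numbers on the redundancy point in that reply, here is a back-of-the-envelope sketch: each firm running its own “IT power plant” must provision for its own peak load, while a shared utility provisions only for the combined peak, which is smaller because peaks rarely coincide. The demand figures and company count are invented purely for illustration.

# Toy model of why pooled supply needs less capacity than private supply.
import random

random.seed(1)
HOURS = 24 * 30        # one month of hourly demand
N_COMPANIES = 20

# Hourly demand per company in arbitrary "server units", bursty and uncorrelated.
demand = [[random.uniform(10, 100) for _ in range(HOURS)]
          for _ in range(N_COMPANIES)]

# Private supply: every company buys capacity equal to its own peak.
private_capacity = sum(max(series) for series in demand)

# Utility supply: capacity equal to the peak of the aggregate load.
aggregate = [sum(series[hour] for series in demand) for hour in range(HOURS)]
utility_capacity = max(aggregate)

savings = 1 - utility_capacity / private_capacity
print(f"Capacity provisioned privately:    {private_capacity:7.0f}")
print(f"Capacity provisioned by a utility: {utility_capacity:7.0f}")
print(f"Redundant capacity avoided:        {savings:7.0%}")

The gap narrows if everyone’s demand peaks at the same moment, but for most business workloads it doesn’t, and that diversity of demand is where the utility’s scale economies come from.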

As I wrote in my article, the biggest barrier to the adoption of utility computing will be attitudinal rather than technical, and some of the people LaMonica quotes illustrate that point. I fear that they may, as Farber suggests, end up playing catch-up.