The new heavy metal

The centralization of computing into the information-processing equivalent of massive powerplants is ushering in a “new era” of “application-specific computing,” IBM technologist Bernard Meyerson tells John Markoff. IBM and Sun are today both introducing new high-end computers aimed at the emerging market.

The IBM machine, designed for online gaming applications, is a mainframe system based on the company’s Cell chip. The “gameframe,” as IBM calls it, is “a server system capable of permitting hundreds of thousands of computer users to interact in a three-dimensional simulated on-screen world” with unprecedented graphic quality, writes Markoff. The Sun machine, designed by Andreas Bechtolsheim, is a video server system that is “potentially powerful enough to transmit different standard video streams simultaneously to everyone watching TV in a city the size of New York.” It is geared to cable and telephone companies looking to supply interactive video programming with personalized ads. Both machines may sell for upwards of a million dollars.

Markoff notes that such specialized supercomputers will compete against grids of cheap, commodity processors, such as the vast grid that Google is building. He also cites a remarkable statistic: “Google’s computing prowess has now reached several million processors, according to one person with detailed knowledge of the system.” Wow. The highest estimate of the number of processors Google runs that I’ve seen up to now is 500,000. It appears that number is now hugely out of date.

The race to outfit the new data utilities is on. It’s Commodity Grid vs. Mainframe 2.0.

UPDATE: Raph Koster argues for a third way (for gaming, anyway), while 3pointD.com has more on IBM’s machine.

Users

In a column in today’s Guardian, I examine how the invention of virtual drugs last week may open up lucrative new opportunities for pharmaceutical firms able to create therapies for the psychological ailments that beset avatars.

Here’s a snippet: “Up to now, avatars have led fairly narrow lives. Their main pursuits have been limited to fighting ogres and dragons and having simulated sex using artificial genitalia. Virtual reality has been like a pornographic version of Middle Earth. Now, avatars have a third and more modern alternative: abusing substances. Fighting, screwing, and getting wasted: Virtual life is becoming more like real life every day.”

Here’s the rest of it.

Stabbing Polonius

Larry Sanger, the cofounder of Wikipedia and, more recently, the sole founder of Citizendium, another online volunteer-written encyclopedia, though at the moment one that remains curled tightly in a fetal position, has written an essay about “the new politics of knowledge” for the journal Edge. It is long, well-meaning, and unreadable. Here’s a taste of Sanger’s deathless prose:

Well, when we say that encyclopedias should state the truth, do we mean the truth itself, or what the best-informed people take to be the truth – or perhaps even what the general public takes to be the truth? I’d like to say “the truth itself,” but we can’t simply point to the truth in the way we can point to the North Star. Some philosophers, called pragmatists, have said there’s no such thing as “the truth itself,” and that we should just consider the truth to be whatever the experts opine in “the ideal limit of inquiry” (in the phrase of C. S. Peirce). While I am not a pragmatist in this philosophical sense, I do think that it is misleading to say simply that encyclopedias aim at the truth. We can’t just leave it at that. Unfortunately, statements do not wear little labels reading “True!” and “False!” We need a criterion of encyclopedic truth – a method whereby we can determine whether a statement in an encyclopedia is true.

It’s like fucking Polonius has come back to life. Get thee back behind the arras, pierced old fool, and badger us not with thy tedious pedantry!

OK, maybe that’s a little harsh. I wish Sanger well. Although I think Citizendium will flop – it’s too late to market and it comes wrapped in an ornate intellectual scaffolding that acts as a kind of force field against intruders (i.e., contributors) – I would like to see it become popular for one simple reason: It would tend to dilute Wikipedia’s hegemony over Google search results, and that would be a small but good thing. Sanger’s article is a defense of his idea that if you gave “experts” some degree of control over Wikipedia’s contents – if you put them at “the head of the table” to watch over the kids – you’d end up with a better Wikipedia. That may well be true, but I sense that if Wikipedia is afflicted by what I’ve termed the cult of the amateur (Sanger calls it “dabblerism”), Citizendium may be afflicted by the cult of the expert. Both cults operate at approximately an equal distance from reality.

To be honest, I don’t see much difference between Sanger and his arch-nemesis and sometime collaborator Jimmy Wales. They’re true believers arguing over a technicality – always the bitterest kind of dispute – and Wales recently sidled toward Sanger’s camp when he came out in favor of introducing a more formal credentialism into Wikipedia’s already extraordinarily bureaucratic operation. (Wikipedia was once about outsiders; now it’s about insiders.) As Wikipedia shifts from pursuing quantity to pursuing “quality,” it is already heading in Sanger’s direction.

Whatever happens between Wikipedia and Citizendium, here’s what Wales and Sanger cannot be forgiven for: They have taken the encyclopedia out of the high school library, where it belongs, and turned it into some kind of totem of “human knowledge.” Who the hell goes to an encyclopedia looking for “truth,” anyway? You go to an encyclopedia when you can’t remember whether it was Cortez or Balboa who killed Montezuma or when you want to find out which countries border Turkey. What normal people want from an encyclopedia is not truth but accuracy. And figuring out whether something is accurate or not does not require thousands of words of epistemological hand-wringing. If it jibes with the facts, it’s accurate. If it doesn’t, it ain’t. One of the reasons Wikipedia so often gets a free pass is that it pretends it’s in the truth business rather than the accuracy business. That’s bullshit, but people seem to buy it.

Now that I’m warmed up, I have to say there’s another thing that gets my goat about Sanger, Wales, and all the other pixel-eyed apologists for the collective mediocritization of culture. They’re all in the business of proclaiming the dawn of a new, more perfect age of human cognition and understanding, made possible by the pulsing optical fibers of the internet. “I am optimistic,” Sanger recently said, with a face as straight as the theoretical line that runs the shortest possible distance between two points, “about humanity’s coming enlightenment.”

Truth! Knowledge! Enlightenment!

Enlightenment, of course, presupposes darkness: If we’re to be delivered into the light, then we must be mired in the murk of ignorance. So Sanger has to paint a fantastical picture of the past for his observations about the present and future to carry any weight. In his fantasy, “what we know” has through the ages been tightly controlled by all-powerful elites and doled out to us like so many spoonfuls of baby food:

In the Middle Ages, we were told what we knew by the Church; after the printing press and the Reformation, by state censors and the licensers of publishers; with the rise of liberalism in the 19th and 20th centuries, by publishers themselves, and later by broadcast media – in any case, by a small, elite group of professionals.

If this isn’t complete nonsense, it is such a ridiculous exaggeration that, for all practical purposes, it’s indistinguishable from complete nonsense. What’s most appalling is the way it presents “we” – by which I assume Sanger means the entirely imaginary claylike mass of undifferentiated beings that to him and others of his ilk represents mankind – as being dumb receptor valves entirely without imagination or a capacity for free thought. If from the Enlightenment to the present, “we” were spoonfed “what we know” by some central cabal of elitist gatekeepers bent on thought control, then why are we – or, more precisely, were we – so smart?

Take a look at your average educated citizen of, say, 1850 and compare the breadth of his knowledge with that of the average educated citizen of today (17 years after the invention of the glorious World Wide Web and six years after the blessed arrival of Wikipedia). I mean, really: there’s no comparison. If elites were tightly controlling “what we know” for the past few centuries, they were certainly doing a clumsy job of it. Are we to suppose that all the great thinkers of the past would have been really smart if only they could have surfed the web?

If an alien were to land on earth today and initiate a study of the relationship between the raw supply of information and the general level of knowledge of the populace, he would almost certainly come to the conclusion that the two are inversely correlated. I think that conclusion would be mistaken – there have to be other variables at work – but it nevertheless underscores the vast difference between getting information and getting knowledge. As Stephen Bertman has written, “Were all the great books of the Western world compressed onto a single silicon chip, the human race would be no wiser.” And were all those books, as well as every other stray strand of digitizable information, woven into what Kevin Kelly calls a “liquid fabric” of online content, linked, tagged, and annotated with a billion user comments, we would still be no wiser.

Sanger continues: “today, if you want to find out what ‘everybody knows,’ you aren’t limited to looking at what The New York Times and Encyclopedia Britannica are taking for granted. You can turn to online sources that reflect a far broader spectrum of opinion than that of the aforementioned ‘small, elite group of professionals.’ … I, at least, think it is wonderful that the power to declare what we all know is no longer exclusively in the hands of a professional elite.”

I swear to God, I have not yet met anyone on this planet, whether sharp as a tack or dumb as a rock, who, if he desires to find out what “everybody knows,” feels that he is limited by what the New York Times or the Encyclopedia Britannica “declares.” The time in my own life when I was most intensively interested in discovering “what we know” was probably when I was in my early twenties. I don’t recall ever looking at an encyclopedia during those years, and (for better or worse) I didn’t spend a lot of time reading newspapers. This was also before the arrival of the personal computer, so I never went online, either. Now maybe I’m misremembering, but I believe I always felt that I had access to a wealth of information about “what we know.” There were books, there were journals and magazines, there were libraries with shelves of reference works and, if you were really ambitious, cabinets of microfiche. There were smart people to talk to, there were woods to walk through, there were cities to explore. It was not at all difficult to find a spectrum of opinion every bit as broad as what you’ll find on the web today. Where was that professional elite that exclusively held the power to control what I knew about what we knew? I’ll tell you where it was: It was nonexistent.

Sure, a lot of people in this world face barriers, economic, political, and geographic, to getting access to information, but that’s hardly the fault of the New York Times or the Encyclopedia Britannica. And if you’re lucky enough not to face those barriers, then the getting of knowledge comes down not to the workings of either media elites or media collectives but to personal desire and initiative. If you have a hankering for knowledge and the will and discipline to pursue it, you will find the information you require, and its quantity need not be measured in terabytes. A little goes a long way. (Some have found a grain of sand sufficient.) If you lack a desire for knowledge, or the will and discipline to pursue it, you can be given all the information in the world and it will leave only the slightest and most delicate impression on your mind – the kind of impression typically left by, say, a Wikipedia article.

Yes, Wikipedia is the most extensive work of paraphrasing the world has ever seen – and, I admit, that’s a useful accomplishment and something its creators can be genuinely proud of – but, in the end, who really cares? It adds not a jot to the sum total of human knowledge. In fact, by presenting knowledge as a readymade commodity, a Happy Meal for Thinkers in a Hurry, it may well be doing more to retard creative thought than to spur it.

In a comment appended to Sanger’s essay, Jaron Lanier distills into four words the biggest problem with Wikipedia’s articles, and my guess is that the criticism will apply equally well to Citizendium’s: “The emphasis is random.” So true. Even when Wikipedia gets the facts right, the balance of those facts, a more subtle issue but one that’s equally important to accuracy, is often off. Small points get blown out of proportion – particularly those subject to debate – while big points get expressed poorly or glossed over. This is not a problem of expertise. It’s a problem of expression. In the end, Sanger’s barking up the wrong tree. The quality of an encyclopedia is not determined by the number of experts who sign up to contribute but by the skill of the writers and editors who translate what the experts know into the language of the lay reader. That’s a job that experts and crowds are both profoundly ill-suited for.

Open source and the programmer’s dilemma

A new article in IEEE Computer, “The Economic Motivation of Open Source Software: Stakeholder Perspectives,” sheds some interesting new light on an old question: Is open source software development good or bad for programmers?

Much has been written about what motivates software writers to contribute to open source. The motivations are, as you might expect, diverse: an ideological belief in free software, a desire to impress one’s peers, a wish to bolster one’s skills, reputation, and career prospects, a paycheck from a company in the open source business, or simply the fun of it. As open source has become more popular, the motivation to participate has, naturally, become stronger. But knowing that an individual coder has rational incentives to contribute to open source development doesn’t tell you whether or not open source improves the economic standing of programmers in general. It doesn’t tell you whether the programming profession ends up with more or less money.

The author of the IEEE Computer article, Dirk Riehle, a researcher with SAP, doesn’t look at that question directly. Rather, he examines, in a theoretical way, how open source changes the economics of the IT markets in which programmers participate. He first looks at why big systems integrators and other “solutions” providers, like IBM, have been promoting open source. He argues that these companies, which sell bundles of products and services to their clients, like open source because it allows them to reduce the amount of money they have to pay to software vendors without requiring that they pass along the savings to customers in the form of lower prices. In other words, the software savings turn into additional services profits, which fall to the solutions providers’ bottom lines. Ultimately, that means that open-source software developers are subsidizing the big solution providers at their own expense. Writes Riehle: “If it were up to the system integrators, all software would be free (unless they had a major stake in a particular component). Then, all software license revenue would become services revenue.” (I would think it’s an overstatement to say that all software license revenue turns into services revenue; assuming there’s competition between solutions providers, some of the savings would go to the customers.)

Riehle also looks at the economic effect of open source on software markets themselves. He argues that, by tearing down the barriers to entry in software markets (by obviating the huge up-front investments required to create a proprietary program), open source spurs competition, which in turn reduces prices and erodes the profits of software vendors. Riehle writes: “Customers love this situation because prices are substantially lower than in the closed source situation. System integrators love the situation even more because they can squeeze out proprietary closed source software.” For the programmers themselves, however, much of the savings reaped by customers and added profits taken in by integrators comes out of their own pockets.

Riehle also notes that open source (because of its openness) tends to diffuse knowledge of particular programs among a much broader set of programmers. That will tend to increase competition among the programmers and hence depress their pay: “Technical skills around the open source product are a key part of determining an employee’s value to a [vendor]. Anyone who’s smart enough can develop these skills because the open source software is available to people outside the firm. Hiring and firing becomes easier because there’s a larger labor pool to draw from, and switching costs between employees are lower compared with the closed source situation. Given the natural imbalance between employers and employees, this aspect of open source is likely to increase competition for jobs and drive down salaries.”

If Riehle’s analysis is correct – and while his thinking is logical, he offers no hard proof of the economic effects he describes – then what we’re seeing playing out among coders is what I’ll term the Programmer’s Dilemma. Because open source skills are increasingly important to an individual programmer’s career prospects, each programmer has a strong motivation to join in – and as more programmers join in, the incentive for each individual to participate becomes ever stronger. At the same time, the total amount of money that goes to programmers falls as open source is adopted by more companies. Individual programmers, in other words, have selfish motives to engage in collectively destructive behavior.
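To see the shape of that dilemma in miniature, here’s a toy simulation – my own back-of-the-envelope sketch, not anything drawn from Riehle’s paper, with every number invented purely for illustration. It assumes that contributors earn a career premium over non-contributors at any given level of open source adoption, while the total pay flowing to programmers shrinks as adoption rises.

```python
# Toy model of the Programmer's Dilemma sketched above. Every number is
# invented for illustration; this is not Riehle's model.

def total_pay(adoption):
    """Index of the total pay flowing to programmers (100 = no open source).
    It shrinks as open source adoption (0.0 to 1.0) erodes license revenue."""
    return 100.0 * (1.0 - 0.4 * adoption)

def individual_pay(contributes, adoption):
    """One programmer's pay, assuming the total is split in proportion to a
    career premium that open source contributors enjoy over holdouts."""
    weight = 1.3 if contributes else 1.0
    average_weight = 1.3 * adoption + 1.0 * (1.0 - adoption)
    return total_pay(adoption) * weight / average_weight

for adoption in (0.0, 0.25, 0.5, 0.75, 1.0):
    holdout = individual_pay(False, adoption)
    contributor = individual_pay(True, adoption)
    print(f"adoption={adoption:.2f}  hold out: {holdout:6.1f}  "
          f"contribute: {contributor:6.1f}  pool: {total_pay(adoption):6.1f}")

# At every adoption level, contributing beats holding out (the individual
# incentive), yet the pool shrinks as adoption rises (the collective cost).
```

At every level of adoption the contributor out-earns the holdout, yet the total pool that all programmers draw from keeps shrinking – which is the whole point of the dilemma.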

A scandal looms for IT industry

More details are emerging about the U.S. government’s charge that IT vendors engaged in a sweeping kickback scheme aimed at influencing the awarding of lucrative government contracts. The Department of Justice filed lawsuits last week alleging that Hewlett-Packard and Sun Microsystems had secretly paid millions of dollars to other vendors that sold their gear to government agencies and that Accenture had accepted millions of dollars of such kickbacks. In statements to Wired News, HP and Accenture denied any wrongdoing, while Sun said it “welcomes the opportunity to address the claims in a fair and impartial forum.”

In addition to the three companies named in the suits, the government lists other tech vendors that allegedly made or received inappropriate payments to influence government purchases. The list is a who’s who of the tech business: EMC, IBM, SAP, Microsoft, Oracle, Dell, BearingPoint, Capgemini Ernst & Young, PricewaterhouseCoopers, EDS, PeopleSoft, Siebel Systems, NCR, Informatica, SAS, Hyperion, Ingram Micro, ACSIS, Verity, and many others.

Information Week provides a detailed rundown of the charges in an article today. Particularly troubling are the allegations of cozy dealings between hardware and software vendors and the consulting firms that are supposed to provide objective advice to the buyers of information technology:

While Sun and HP allegedly paid millions of dollars each year in kickbacks, Accenture allegedly accepted them in the form of “system integrator compensation,” rebates, and marketing assistance fees. The company earned all three from Sun and HP, according to the complaint. As a consultant for the government, Accenture was hired as an objective adviser in choosing vendors and purchasing IT equipment, software, and services. The government, however, says Accenture and its purchasing subsidiary, Proquire, were less concerned with their client, and more interested in profits and revenue from partners. “As a result, millions of dollars of kickbacks were sought, received, offered, and paid between and among the defendants with the alliances in violation of the False Claims Act and other federal statutes and regulations,” the complaint said.

Watch these suits closely. Should they go to trial, they threaten to open a Pandora’s Box for the trillion-dollar computing industry.

UPDATE: Business Week reports that the government is expected to join lawsuits against additional vendors and underscores the stakes that may be involved:

The Justice Dept.’s involvement is significant. The original [whistleblower] cases were filed under the False Claims Act, which allows individuals to bring suit on behalf of the U.S. in return for up to one-third of any damages recovered. Most cases proceed in civil court without the federal government’s involvement. U.S. Attorneys often save their prosecutorial muscle for false-claims cases in which significant money is at stake or where evidence of wrongdoing is particularly strong. The U.S. joins only about 22% of false-claim cases but recovers about 98% of claims, according to Taxpayers Against Fraud, a watchdog group based in Washington.

In addition to the suits against Accenture, HP, and Sun that the government has already joined, whistleblower suits are proceeding against Cisco, EDS, SAP, Lockheed Martin, Oracle, AMS, CACI, SeeBeyond, Dell, and five other unnamed defendants, including, Business Week reports, an IBM subsidiary. The court has excused Boeing, Raytheon, Microsoft, SAIC, and Exostar from the suits.

SAP touts SaaS savings

As SAP prepares to unveil its new software-as-a-service offering – codenamed A1S – at its big Sapphire conference in Atlanta this week, it’s highlighting the big cost savings that on-demand software offers in contrast to traditional installed programs. SAP CEO Henning Kagermann, who once scoffed at the SaaS model, now seems well on his way to becoming a true believer. Although the A1S system, which is aimed at small and medium-sized businesses (the big growth market for enterprise software), comes in both installed and SaaS versions, Kagermann tells Business Week that the SaaS version can offer huge cost savings over the traditional version:

SAP plans to offer A1S in two flavors: as a software package midsize companies can install on their servers and customize to their needs, and as an online software suite that SAP will run on its own computers and deliver over the Internet for small companies with fewer options. The online software could cost half as much as the packaged version of A1S, says Kagermann. “That’s the only way to lower the cost of ownership by factors,” he says.

The shift toward providing software over the Net poses some big challenges for the German giant. First, it’s going to have to get into the data center business – and that will mean substantial new capital investments. As Business Week reports: “The investment will shave its expected 2007 operating margin by 1% to 2%, to around 27% of sales. That’s slightly lower than in 2006. ‘You have a lot of costs up front,’ says Kagermann. ‘It’s a different model.’”

Second, as the cost savings of the SaaS option become apparent, big customers will likely press SAP to cut the prices they pay as well:

Making matters trickier, SAP’s twin goals of reaching down to smaller customers and updating its traditional customers are somewhat at odds. “The midmarket is the last mining opportunity for enterprise software companies,” says [a Morgan Stanley analyst]. “The problem is once you go downstream, it can lead to pricing pressure at the top end,” as large companies demand discounts to close the gap between what they and smaller users pay.

Kagermann is still careful to avoid using the term “software as a service.” Like his counterparts at Microsoft, he prefers to say that the future of software will involve a “hybrid” model of both installed software and Internet services. He’s right – for the foreseeable future, anyway – but as long as the SaaS model offers dramatic savings to customers, it will be the model that shapes the economics of the business and the costs and profits of software firms. Sooner or later, Kagermann will have to call a spade a spade.

UPDATE: The way SAP is distinguishing between the SaaS and traditional versions of its software, as Kagermann’s comments above make clear, is through “options,” or “features.” The SaaS version is a lot cheaper, but the traditional, expensive version has more features. (This is the standard “versioning” practice in the software world, as Carl Shapiro and Hal Varian described in their excellent book Information Rules. The lack of features in cheaper versions is due less to technological constraints than to the vendor’s deliberate decision to hamstring the cheaper version.) Today, Dan Farber reports on an observation SAP executive vice president Uwe Hommer made this morning: “He said that customers typically use 30 percent of functionality of SAP solutions.” That kind of makes you wonder about the real business value of all those expensive “features.”
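For what it’s worth, the mechanics of versioning are almost trivially simple. Here’s a minimal, purely hypothetical sketch – the edition names and features are mine, not SAP’s – of how a vendor might gate the same codebase into a cheap edition and an expensive one:

```python
# Purely hypothetical sketch of "versioning": the same codebase ships in
# every edition, and the cheaper tier simply has features switched off.
# Edition names and feature names are invented, not SAP's actual tiers.

EDITION_FEATURES = {
    "saas_basic": {"accounting", "crm"},
    "on_premise_full": {"accounting", "crm", "supply_chain",
                        "custom_reports", "deep_customization"},
}

def feature_enabled(edition: str, feature: str) -> bool:
    """The gate: nothing technical stops the cheap edition from running a
    feature; the vendor's pricing decision does."""
    return feature in EDITION_FEATURES.get(edition, set())

print(feature_enabled("saas_basic", "deep_customization"))       # False
print(feature_enabled("on_premise_full", "deep_customization"))  # True
```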

Go ask Alice’s avatar

Red Light Center, an avatarian sex site, expanded the bounds of virtual reality on Friday by introducing virtual dope. Members of the community can now, reports Simson Garfinkel of Technology Review, “enter a virtual rave and take virtual ecstasy, smoke a virtual joint, and even munch on some virtual mushrooms.”

Far out.

According to Brian Shuster, chief executive of Utherverse, the company that runs Red Light Center, the site is introducing the simulated abusable substances as a kind of public service. Getting wasted virtually, he explains, will decrease people’s desire to get wasted in real life:

In a virtual environment, [peer] pressure shifts from trying actual drugs to experimenting with virtual drugs. Thus, users have a safe platform to explore the social aspects of drug use, without having to risk doing the actual drugs. By separating the social pressure from the real-world application, users have a totally revolutionary mechanism to deal with peer pressure, and actually to give in to peer pressure, without the negative consequences.

Moreover, users of virtual drugs have reported the effects of these virtual drugs to be surprisingly realistic and lifelike. To the extent that users can enjoy both the social benefits of virtual drugs as well as the entertainment associated with drug use, all with no actual drug consumption, the value of taking actual drugs is diminished.

Hmm. I bet if you were really, really high, that might actually make sense.

There’s a certain symmetry to the idea of virtual drugs. When the concept of virtual, or artificial, reality first emerged at the end of the sixties, it was tightly connected to the drug culture. The consciousness-expanding hallucinations that might be conjured up by computers weren’t so different from those that emanated from a tab of acid (or so it seemed at the time). Now that it’s possible to get stoned in cyberspace, we’ve kind of come full circle. I mean, think about it: When avatars hallucinate, they must see the real world.

Whoa. I’m freaking myself out.