
MySpace’s vacancy

When an adult puts his ear to the door of youth culture, he inevitably mistakes the noise for the signal – and usually misses the signal altogether. So we have media blogger Scott Karp reeling back in horror from his visit to MySpace. It is, he tells us, “a DEEPLY DISTURBING place,” rife with “sexually suggestive or explicit content.” There’s even a hint of “murder” in the air. It is “humanity in the raw.”

Excuse me while I go sign up for an account.

What’s most fascinating about Karp’s post, though, is not his reaction to MySpace but his reaction to his reaction to MySpace. Having offered a moral critique – a visceral one – he suddenly goes all wobbly. “I’m not going to do a moral critique of MySpace or Web 2.0 or anything else — that’s not my gig,” he says. Then he says it again, with caps: “let me be repeat — this is NOT a moral critique. It’s a practical, business critique.” A wise retreat, I suppose. Moral critiques are so uncool. They’re the surest way to lose your web cred.

Still, I liked the outburst, the act of recoiling. It was real. The “practical, business critique” seems forced in comparison: “‘Social media’ may be all the rage, but ‘society’ functions best somewhere in between anarchy and fascism. Let it drift too far to one extreme, and things can get ugly. And when things get ugly, it’s hard to sell advertising.” That’s automatic writing, and when it’s not platitudinous it’s wrong. Ugly’s edgy, and edgy’s where advertisers want to be. Did Paris Hilton lose her endorsement deals when her naughty video leaked onto the web? Hell no. She got bigger and better ones.

A lot of bloggers hammered Karp for being an alarmist, for questioning the social-media orthodoxy. One went so far as to compare MySpace to a bicycle: Kids can get hurt on both, right? So what’s the big deal? Maybe I’m misremembering, but I think my old banana bike was a pretty wholesome toy, even with the mile-high wheelie bar. Riding it around the neighborhood with my friends was a way to get some exercise and fresh air, to see things in three dimensions, to escape “my space.” MySpace seems a little different.

Fred Wilson, a blogging venture capitalist, sees in MySpace the signs of a great emancipation:

We are at the dawn of the age of personalized media. The web has given the world a place where the audience is the publisher and what we are witnessing (and hopefully participating in) is the personalization of media. It will manifest itself in many strange and wonderful ways. And I am embracing it; for me, for my kids, and for the rest of my life.

I guess you see what you want to see. When I look around MySpace I don’t see much that’s “strange and wonderful” – or “deeply disturbing,” either. I wish I did. What I see is a dreary sameness, a vast assembly of interchangeable parts. Everything feels secondhand: the pimps-and-hos poses before the cameraphone, the ham-fisted, cliche-choked blog-prose. It’s sad to see so much effort put into self-expression with so little to express. Humanity in the raw? No, this is humanity boiled to blandness in the tin pot of personalization.

There was another blogger who responded to Scott Karp’s post by comparing the effect of MySpace to that of Elvis’s gyrating hips back in the fifties: The old folks didn’t get it then, and they don’t get it now. But MySpace isn’t anything like Elvis. It’s more like Jim Morrison limply exposing himself on stage in Miami in 1969: an enervated pantomime, force turned to farce.

I’ll tell you what scares me about MySpace. It’s not how dangerous it is, but how safe.

The editor and the crowd

Last weekend, two prominent technology bloggers, Dave Winer of the venerable Scripting News and Robert Scoble of the Microsoft-sponsored Scobleizer, expressed their frustration with Tech Memeorandum, a popular website that highlights the headlines of technology-related stories appearing in blogs, newspapers and other media. In Winer’s view, Memeorandum has turned into a tedious contest “with one blogger trying to top another for the most vacuous post.” Scoble, echoing Winer’s complaint, announced that he was going to avoid looking at Memeorandum “for at least a week” and instead rely on his self-selected RSS feeds to track technology news. Others have also been critical of Memeorandum, suggesting that its content is overly narrow or that it draws from too small a pool of sources.

But what exactly is being criticized here? How does Memeorandum choose what appears on its much-trafficked homepage? The answer is, it doesn’t choose – at least not in the way we typically think of “choosing.” Memeorandum doesn’t employ any editors to sift through the hundreds of technology stories that appear every day and select a handful to highlight. Rather, the site uses a software formula, or algorithm, to do the sifting and selecting. The exact nature of the algorithm, written by Memeorandum founder Gabe Rivera, remains confidential, but we know that it works by tapping into what’s come to be called “the wisdom of the crowd.” Like Google’s search algorithm, it tracks the actual choices people make while using the internet – what they look at, what links they follow, what links and words they choose to put into their own blogs or sites, and so on – and uses that information to calculate the crowd’s collective judgment about popularity, authority, timeliness and importance. Those calculations in turn determine which headlines appear on the Memeorandum homepage and the order in which they appear – just as similar sorts of calculations determine the results served up by Google, Yahoo and other search engines. The content of the Memeorandum homepage changes every five minutes, as the algorithm takes in more information and revises its calculations.
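The exact formula is secret, but the general shape of such a crowd-driven ranker is easy to sketch. Here is a minimal, purely hypothetical version in Python; the field names, the authority weighting, and the half-life decay are my assumptions, not Rivera's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    published: float            # unix timestamp
    inbound_links: int = 0      # how many blogs have linked to the story
    linker_weight: float = 0.0  # summed "authority" of those linking blogs

def crowd_score(story: Story, now: float, half_life_hours: float = 6.0) -> float:
    """Popularity plus linker authority, decayed by age so the page stays timely."""
    age_hours = (now - story.published) / 3600.0
    decay = 0.5 ** (age_hours / half_life_hours)
    return (story.inbound_links + story.linker_weight) * decay

def front_page(stories: list, now: float, slots: int = 15) -> list:
    """Re-rank on each refresh as new link data comes in."""
    return sorted(stories, key=lambda s: crowd_score(s, now), reverse=True)[:slots]
```

The decay term is what makes the homepage churn every few minutes: a heavily linked but day-old story can be outranked by a fresher one with far fewer links. No editor decides anything; the ordering falls out of the crowd's accumulated clicks and links.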

In a very real sense, the crowd takes the place of a human editor on a site like Memeorandum.

Collecting crumbs

Because it uses software to, in effect, model the mind of the crowd, Memeorandum is a good example of a second-generation internet company – a “Web 2.0” business, as they say. In his influential essay “What Is Web 2.0?”, Tim O’Reilly identifies “harnessing collective intelligence” as one of the tenets of Web 2.0. In fact, he says, it’s “the central principle behind the success of the giants born in the Web 1.0 era who have survived to lead the Web 2.0 era.”

O’Reilly points out that the “intelligence” of the internet’s users is naturally (and automatically) embedded in the web’s hyperlinked structure, or architecture:

Hyperlinking is the foundation of the web. As users add new content, and new sites, it is bound in to the structure of the web by other users discovering the content and linking to it. Much as synapses form in the brain, with associations becoming stronger through repetition or intensity, the web of connections grows organically as an output of the collective activity of all web users.

What Memeorandum and companies like it do is use software to discern patterns in this “web of connections,” patterns that, they hope, can be turned into useful or otherwise desirable online products and services. They mine the wisdom of the crowd, and then sell that wisdom back to the individual members of the crowd (either directly or, more typically, indirectly through advertising). As O’Reilly takes pains to note, this process has less to do with encouraging the active, conscious participation of web users in creating content than with simply following behind users, collecting the crumbs of “intelligence” that they leave behind as they journey through the web’s, or a single site’s, hyperlinked architecture:

One of the key lessons of the Web 2.0 era is this: Users add value. But only a small percentage of users will go to the trouble of adding value to your application via explicit means. Therefore, Web 2.0 companies set inclusive defaults for aggregating user data and building value as a side-effect of ordinary use of the application … They build systems that get better the more people use them … The architecture of the internet, and the World Wide Web, as well as of open source software projects like Linux, Apache, and Perl, is such that users pursuing their own “selfish” interests build collective value as an automatic byproduct.

In his book on Google, The Search, John Battelle makes a similar point using different terms. He says that the internet contains a “database of intentions.” Every search we make, every link we click, every word we write, every moment we spend looking at a page – each is a little piece of data about ourselves that we leave behind. In combination, all the billions and billions of bits of data left by millions and millions of web users turn the internet into a great database not just of intentions – “of desires, needs, wants, and likes” – but also of “collective intelligence,” which companies like Memeorandum are free to mine with software and forge into new products and services.

Mild or spicy?

When people criticize Memeorandum, therefore, they are not really criticizing Memeorandum. They are criticizing the crowd and the crowd’s “wisdom.” After all, in good Web 2.0 fashion, it is the crowd that is “choosing” what appears – and what does not appear – on the Memeorandum homepage.

It’s useful at this point to take a closer look at that homepage, and to compare it with the homepage of a similar site that uses a much different, and more familiar, method of “choosing”: Slashdot. Founded in 1997 by Rob Malda, Slashdot provides “news for nerds”; it is a forum where software developers, information technology professionals and other “geeks” can discuss various topics of interest to them. Slashdot has developed sophisticated software to mediate those discussions. But decisions about which stories to highlight on Slashdot’s most valuable real estate, its homepage, are not made by software algorithms. They’re made the old-fashioned way: by people. Usually, in fact, they’re made by just one person, Malda. He acts as Slashdot’s very human editor. “If you’ve been reading Slashdot,” Malda writes about the site’s editorial method, “you know what the subjects commonly are, but we might deviate occasionally. It’s just more fun that way. Variety Is The Spice Of Life and all that, right? We’ve been running Slashdot for a long time, and if we occasionally want to post something that someone doesn’t think is right for Slashdot, well, we’re the ones who get to make the call. It’s the mix of stories that makes Slashdot the fun place that it is.”

As I write this, there are 15 stories on Memeorandum’s homepage. Twelve of them have to do with the development of internet-related products or services: new web sites, new features on existing web sites, new mobile services, new computing or communication devices or components. One is a list of the most memorable villains in video games. One is a list of “proverbs” for technology entrepreneurs. And one is about the invention of a new device to help paralyzed people communicate. If you visit Memeorandum regularly, you’ll recognize this as a fairly typical assortment. The great majority of the stories Memeorandum highlights tend to be about actual or rumored introductions of new web sites and services or computer or communications devices, supplemented by debates about blogging practices and passing controversies involving the media or internet technologies or companies. By any measure, the site presents a very, very narrow slice of the world of “technology.”

Three of the stories on Memeorandum are also among the 16 stories featured on the Slashdot home page right now. Slashdot also has six other stories on information technology topics, mainly involving software development or the operation of corporate IT departments. But nearly half of Slashdot’s stories don’t fit the mold. They range across a variety of technology and science subjects, and they’re often surprising: one’s on the use of bacteria to “eat” discarded styrofoam, one’s on a possible link between coffee consumption and heart attacks, one’s on evidence that human genes are still evolving, one’s on the discovery of a “hairy lobster,” one’s on nuclear power and climate change, and one’s on the breeding of “designer mice” for experiments. Although Slashdot’s target audience is far more limited than Memeorandum’s, its content is far more diverse. It’s also, to my eye, anyway, more engaging, interesting and, yes, fun.

One can see on Slashdot an active, interested, engaged mind at work – the mind of a skilled editor. In comparison, Memeorandum feels flat and wooden, like the output of a computer. Memeorandum is claustrophobic where Slashdot is expansive. Memeorandum is mired in the predictable while Slashdot revels in the unexpected. Memeorandum plays it safe; Slashdot takes chances.

I’m not trying to pick on Memeorandum – as I said, it’s a popular site that’s clearly delivering a valuable service to a lot of people. Its flaws are the same flaws you see on other sites that use algorithms to filter content. But I do think we can learn something important here, something about “the crowd” and “the editor” and their respective roles – and maybe, at least by implication, something about the evolution of media, too.

Mindfulness and mindlessness

As the comparison of Memeorandum and Slashdot shows, the software-mediated crowd is a poor replacement for a living, breathing, thinking editor. But there are other things that the crowd is quite good at. The crowd tends, for instance, to be much better than any of its members at predicting an uncertain future result that is influenced by many variables. That’s why stock market indexes beat individual money managers over the long run. It’s easy to understand why. First, there are limits to the ability of any single individual to understand the complexities in how a large number of variables change and influence one another over time. Second, every individual’s thinking is subject to idiosyncrasies and biases – some conscious, some not. The crowd aggregates all individuals’ knowledge about variables while balancing out their personal biases and idiosyncrasies. It’s not the “wisdom” of crowds that makes crowds useful, in other words; it’s their fundamental mindlessness. What crowds are good for is producing average results that are not subject to the biases and other quirks of human minds.
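The averaging argument can be made concrete with a toy model (my own illustration, not anyone’s real market). Give each guesser a persistent personal bias plus some random noise, and compare the crowd’s mean against the typical individual:

```python
import random

def crowd_vs_individual(true_value=100.0, guessers=1000, seed=7):
    """Each guesser is biased and noisy in his own way; averaging cancels it out."""
    rng = random.Random(seed)
    # a persistent personal bias plus moment-to-moment noise for each guesser
    guesses = [true_value + rng.gauss(0, 15) + rng.gauss(0, 5)
               for _ in range(guessers)]
    crowd_error = abs(sum(guesses) / guessers - true_value)
    mean_individual_error = sum(abs(g - true_value) for g in guesses) / guessers
    return crowd_error, mean_individual_error
```

Run it and the crowd’s error comes out a small fraction of the average individual’s, because the idiosyncratic errors point in every direction and cancel. That cancellation is the whole trick: the crowd isn’t thinking, it’s averaging.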

That’s also why search engines work pretty well with algorithms (until, at least, they begin to be gamed by individuals using their minds): They produce the result that best suits what the average searcher is looking for. You don’t want generally used search engines to reflect individual biases. Indeed, one of their main jobs is to filter out those biases – and revert to the average.

But that’s also why algorithms don’t work very well as editors. With an editor, you don’t want mindlessness; you want mindfulness. A good editor combines an understanding of what the audience wants with a healthy respect for the idiosyncrasies of his own mind and the minds of others. A good editor doesn’t aim to provide a bland “average result”; he wants to wander widely around the average, at times even to strike out in the opposite direction altogether. The mindless crowd filters out personality along with idiosyncrasy and bias. The mindful editor is all about personality. “It’s just more fun that way,” as Malda says.

Of course, such distinctions may not matter all that much in the future. Running an algorithm, after all, tends to be a lot cheaper than paying a staff of idiosyncratic editors, particularly when you’re trying to corral something with the vastness of the web. As Memeorandum’s popularity shows, moreover, an algorithm may often be “good enough.” For distracted people looking for a quick fix of information, a mindless average may be just what the doctor ordered. And if we can give that mindless average a sexy name like “collective intelligence,” it can start to look downright attractive. As we adapt to the internet, we may just learn to forget that an algorithm, no matter how elegantly conceived, is no substitute for a person, and that a crowd, no matter how full of “wisdom,” is no substitute for an editor.

But I hope we don’t.

The click economy

Last April, in one of my first posts on this blog, I wrote that “the eyeball strategy is back – with a vengeance.” I was responding to an announcement from Google that its revenues and profits had continued to skyrocket in last year’s first quarter. Now, the brokerage firm Piper Jaffray is predicting that Google’s stock will jump another 50 percent during 2006, surpassing $600 a share, and that it will be all blue skies for the company through the end of the decade. Writes analyst Safa Rashtchy: “In 2005, we estimate the paid search industry generated $10B globally, with Google capturing as much as 64% of that. In 2006, we expect the market to grow 41% with Google growing by more than 58% on a net revenue basis as the company capitalizes on its globally strong brand and its high revenue-per-search. Over the next five years, we estimate the paid search industry will grow at a 37% CAGR to more than $33B in 2010, and we expect Google to capture the lion’s share of that revenue and grow faster than the market as a whole.”

Laissez les bons temps rouler! Still, though, I think I was wrong to use Google to illustrate the return of “eyeball monetization” as an attractive internet strategy. Google’s business isn’t really about monetizing eyeballs; it’s about monetizing clicks. That may seem like a small distinction – you have to attract the eyeball, after all, before you can spur the click – but I think there’s actually a very big difference. Eyeball monetization is the traditional media strategy: publish or broadcast content that attracts readers or viewers, and then intersperse ads among that content. The content, in this case, serves not to prompt action directly, but merely to draw an audience that’s attractive to companies looking to promote their products and services. There’s a natural distance, in other words, between the content and the ads – a distance that’s good for the content producer but often frustrating to the advertiser.

The click monetization strategy removes that distance. In Google’s AdSense program, for instance, a media company, or other content producer, earns nothing by simply attracting eyeballs. It only brings in cash by getting viewers to click on an ad link. The value of those clicks, moreover, varies enormously. Not all clicks are created equal. The economic incentive for the content producer therefore is not to produce content that simply engages a large or demographically attractive audience, but to produce content that (a) attracts an audience likely to click on a valuable advertising link and (b) increases the odds that such lucrative clicks will actually happen. Google talks a lot about the “relevance” of its ads, but relevance is a byproduct. Google is building an extraordinarily sophisticated machine for manipulating consumers – for increasing the odds that you or I will not just view or read but click. The most economically successful online content producers will be those that work within that system.

We’re still in the early stages of the growth of online media, and it’s not yet clear how the economics of click monetization will influence the production and distribution of content. Many content producers haven’t yet moved from the “eyeball” world to the “click” world. There are optimists who believe the web, and particularly the content-production technologies of Web 2.0, will erase the idea of “consumers” entirely; we’ll all become “producers,” and advertisers and marketers will serve us rather than manipulate us. But that looks increasingly like wishful thinking. Economics is destiny, and the economics of online media is all about manipulating consumers – in ways that couldn’t even have been dreamed of in the past.

Tribes of the internet

It’s only natural to think that a revolutionary communications technology like the internet will help break down barriers between people and bring the world closer together. But that’s not the only scenario, or even the most likely one. The internet turns everything, from knowledge-gathering to community-building, into a series of tiny transactions – clicks – that are simple in isolation yet extraordinarily complicated in the aggregate. Research shows that very small biases, when magnified through thousands or millions or billions of choices, can turn into profound schisms. There’s reason to believe, or at least to fear, that this effect, inherent in large networks, may end up turning the internet into a polarizing force rather than a unifying one.

In a 1971 article titled “Dynamic Models of Segregation,” Thomas Schelling, winner of the 2005 Nobel Prize for economics, offered a fascinating reappraisal of the segregation of communities along racial lines, illustrating the way biases are magnified through a kind of network effect. If asked what lies behind racial segregation, most of us would likely point to prejudice and discrimination. But Schelling, through a simple experiment, showed that extreme segregation may have a much more innocent cause. Mark Buchanan summarized Schelling’s findings in his 2002 book Nexus:

Schelling began by imagining a society in which most people truly wish to live in balanced and racially integrated communities, with just one minor stipulation: most people would prefer not to end up living in a neighborhood in which they would be in the extreme minority. A white man might have black friends and colleagues and might be happy to live in a predominantly black neighborhood. Just the same, he might prefer not to be one of the only white people living there. This attitude is hardly racist and may indeed be one that many people – black, white, Hispanic, Chinese, or what have you – share. People naturally enjoy living among others with similar tastes, backgrounds, and values.

Nevertheless, innocent individual preferences of this sort can have startling effects, as Schelling discovered by drawing a grid of squares on a piece of paper and playing an illuminating game. On his grid, he first placed at random an equal number of black and white pieces, to depict an integrated society of two races mingling uniformly. He then supposed that every piece would prefer not to live in a minority of less than, say, 30 percent. So, taking one piece at a time, Schelling checked to see if less than 30 percent of its neighbors were of the same color, and if this was the case, he let that piece migrate to the nearest open square. He then repeated this procedure over and over until finally no piece lived in a local minority of less than 30 percent. To his surprise, Schelling discovered that at this point the black and white pieces not only had become less uniformly mixed but also had come to live in entirely distinct enclaves. In other words, the slight preference of the individual to avoid an extreme minority has the paradoxical but inexorable effect of obliterating mixed communities altogether.

Buchanan sums up the lesson of Schelling’s experiment: “Social realities are fashioned not only by the desires of people but also by the action of blind and more or less mechanical forces – in this case forces that can amplify slight and seemingly harmless personal preferences into dramatic and troubling consequences.” (You can download a piece of Windows-only software to perform the Schelling experiment yourself.) In the real world, with its mortgages and schools and jobs and moving vans, the “mechanical forces” of segregation move fairly slowly; there are brakes on the speed with which we pull up stakes and change where we live. In internet communities, there are no such constraints. Making a community-defining decision is as simple as clicking on a link – adding a feed to your blog reader, say, or a friend to your social network. Given the presence of a slight bias to be connected to people similar to ourselves, the segregation effect would thus tend to happen much faster – and with even more extreme consequences – on the internet.
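Schelling’s game is simple enough to reproduce in a few dozen lines, no Windows-only download required. What follows is my own toy reconstruction: the grid size, fill rate, and the rule of moving unhappy pieces to a random empty square (Schelling used the nearest open square) are all simplifying assumptions, but the segregation effect emerges just the same:

```python
import random

def same_neighbor_fraction(cells, size, i):
    """Fraction of an agent's occupied neighbors that share its color."""
    r, c = divmod(i, size)
    same = total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr, dc) != (0, 0) and 0 <= rr < size and 0 <= cc < size:
                neighbor = cells[rr * size + cc]
                if neighbor is not None:
                    total += 1
                    same += (neighbor == cells[i])
    return same / total if total else 1.0  # an isolated agent is content

def average_similarity(cells, size):
    """Mean same-color-neighbor fraction over all agents: a segregation index."""
    fracs = [same_neighbor_fraction(cells, size, i)
             for i, agent in enumerate(cells) if agent is not None]
    return sum(fracs) / len(fracs)

def schelling(size=20, threshold=0.30, fill=0.8, seed=1, max_rounds=200):
    """Agents with fewer than `threshold` same-color neighbors move to a random
    empty square, round after round, until everyone is satisfied.
    Returns the segregation index before and after."""
    rng = random.Random(seed)
    cells = [None] * (size * size)
    agents = ['B', 'W'] * int(size * size * fill / 2)
    for spot, agent in zip(rng.sample(range(size * size), len(agents)), agents):
        cells[spot] = agent  # start from a uniformly mixed grid
    before = average_similarity(cells, size)
    for _ in range(max_rounds):
        movers = [i for i, a in enumerate(cells) if a is not None
                  and same_neighbor_fraction(cells, size, i) < threshold]
        if not movers:
            break  # everyone is satisfied
        for i in movers:
            empties = [j for j, a in enumerate(cells) if a is None]
            j = rng.choice(empties)
            cells[j], cells[i] = cells[i], None
    return before, average_similarity(cells, size)
```

Starting from a uniform mix, where roughly half of each agent’s neighbors share its color, the grid settles into enclaves where the great majority do, even though no agent demanded more than a 30 percent minority share. The “mechanical force” Buchanan describes is nothing more than that loop running to convergence.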

This is all theoretical, of course, but it’s easy to see how it follows logically from Schelling’s findings. And there is other evidence that the internet may end up being a polarizing force. In a recent academic paper, called “Global Village or Cyber-Balkans? Modeling and Measuring the Integration of Electronic Communities,” Erik Brynjolfsson, of MIT, and Marshall Van Alstyne, of Boston University, describe the results of a model that measured how individuals’ online choices influence community affiliation. “Although the conventional wisdom has stressed the integrating effects of [internet] technology,” they write, in introducing their study, “we examine critically the claim that a global village is the inexorable result of increased connectivity and develop a suite of formal measures to address this question.”

They note that, because there are limits to how much information we can process and how many people we can communicate with (we have “bounded rationality,” to use the academic jargon), we naturally have to use filters to screen out ideas and contacts. On the internet, these filters are becoming ever more sophisticated, which means we can focus our attention – and our communities – ever more precisely. “Our analysis,” Brynjolfsson and Van Alstyne write, “suggests that automatic search tools and filters that route communications among people based on their views, reputations, past statements or personal characteristics are not necessarily benign in their effects.” Diversity in the physical world “can give way to virtual homogeneity as specialized communities coalesce across geographic boundaries.”

They stress that “balkanization” is not the only possible result of filtering. “On the other hand,” they write, “preferences for broader knowledge, or even randomized information, can also be indulged. In the presence of [information technology], a taste for diverse interaction leads to greater integration – underscoring how the technology serves mainly to amplify individual preferences. IT does not predetermine one outcome.” Nevertheless, they write that their model indicates, in an echo of Schelling’s findings, that “other factors being equal, all that is required to reduce integration in most cases is that preferred interactions are more focused than existing interactions.” If, in other words, we have even a small inclination to prefer like-minded views and people, we will tend toward creating balkanized online communities.

Such fragmentation of association tends to lead to an ever-greater polarization of thinking, which in turn can erode civic cohesiveness, as the authors explain:

With the customized access and search capabilities of IT, individuals can focus their attention on career interests, music and entertainment that already match their defined profiles, and they can arrange to read only news and analysis that align with their preferences. Individuals empowered to screen out material that does not conform to their existing preferences may form virtual cliques, insulate themselves from opposing points of view, and reinforce their biases. Authors of collaborative filtering technology have long recognized its ability to both foster tribalism as well as a global village.

Indulging these preferences can have the perverse effect of intensifying and hardening pre-existing biases. Thus people who oppose free trade are likely, after talking to one another, to oppose it more fiercely; people who fear gun control appear, after discussion, more likely to take action; and juries that want to send a message seem, after deliberation, to set higher damage awards. The reasons include information cascades and oversampled arguments. In one, an accumulating, and unchallenged, body of evidence leads members to adopt group views in lieu of their own. In the other, members of a limited argument pool are unwilling or unable to construct persuasive counterarguments that would lead to more balanced views. The effect is not merely a tendency for members to conform to the group average but a radicalization in which this average moves toward extremes.

Increasing the number of information sources available may worsen this effect, as may increasing the attention paid to these information sources … Internet users can seek out interactions with like-minded individuals who have similar values and thus become less likely to trust important decisions to people whose values differ from their own. This voluntary balkanization and the loss of shared experiences and values may be harmful to the structure of democratic societies as well as decentralized organizations.

It’s too early in the history of the internet to know whether this disturbing scenario will come to pass, a point that the authors emphasize. But we need only look at, say, the tendency toward extremism – and distrust of those holding opposing views – among the most popular political bloggers to get a sense of how balkanization and polarization can emerge in online communities. Brynjolfsson and Van Alstyne end on this note: “We can, and should, explicitly consider what we value as we shape the nature of our networks and infrastructure – with no illusions that a greater sense of community will inexorably result.” Personally, I’m even more fatalistic. I’m not sure we’ll be able to influence the progression of internet communities by tinkering with “our networks and infrastructure.” What will happen will happen. It’s written in our clicks.

Let Wikipedia be Wikipedia

In a blogospheric minute, Wikipedia has gone from Shining Example of All That’s Wonderful About the Web to Exhibit Number One for All That’s Wrong with the Web. The funny thing is, Wikipedia itself hasn’t changed at all. What it was as Hero it is as Goat. Now, I see that some eminent West Coast bloggers are talking about organizing a meeting to figure out how to fix Wikipedia. That would be a beautiful act of paternalistic condescension, but I wonder if what Wikipedia really needs right now is a bunch of well-meaning carpetbaggers talking mumbo-jumbo. Hell, that’s pretty much what got Wikipedia into this pickle in the first place.

Here’s my radical suggestion: Leave it to the Wikipedians.

Wikipedia ran into trouble because it assumed – or allowed itself, not unwillingly, to have thrust upon it – a mantle of “authority” that it neither needed nor deserved. It became a cause celebre of techno-romantics who saw it as a harbinger of an internet-enabled era of egalitarian media and universal creativity. The perception problem was exacerbated by the overweening rhetoric of Wikipedia founder Jimmy Wales, who let it be known that Wikipedia intended to become “the most authoritative source of information in the world” and that it should “market itself as an independent global resource … comparable to the Red Cross.” He may as well have stuck a “Kick Me” sign on Wikipedia’s ass.

Wikipedia is not an authoritative encyclopedia, and it should stop trying to be one. It’s a free-for-all, a rumble-tumble forum where interested people can get together in never-ending, circular conversations and debates about what things mean. Maybe those discussions will resolve themselves into something like the truth. Maybe they won’t. Who cares? As soon as you strip away the need to be like an encyclopedia and to be judged like an encyclopedia – as soon as you stop posing as an encyclopedia – you get your freedom back. You lose the need for complicated rules and restrictions and all sorts of tortured hand-wringing and navel-gazing. You don’t have to worry about critics because critics don’t have anything to criticize. Some facts are wrong? Hey, we never claimed they wouldn’t be. Someone created an entry about an imaginary being from Planet Xenat? So what were you expecting – an encyclopedia?

Dump the “authoritative” shtick. Kill the “Free Encyclopedia” tag line. Discourage the syndication of content by other sites. Tell the utopianists and the A-Listers to get stuffed. Stick a little disclaimer at the top of every page that says, “Wikipedia is not intended to be an authoritative reference work and should not be used as one. If you see an error or omission, feel free to fix it.” And then go at it. See what happens. Leave encyclopedia editing to the encyclopedia editors. Be Wikipedians.

Have faith

Wired editor Chris Anderson offers a spirited defense of internet “systems” like Wikipedia, Google, and the blogosphere. Criticism of these systems, he argues, stems largely from our incapacity to comprehend their “alien logic.” Built on the mathematical laws of probability, they “are statistically optimized to excel over time and large numbers.” They sacrifice “perfection at the microscale for optimization at the macroscale.” Our “mammalian minds,” by contrast, are engineered not to apprehend the wonders of the vast, probabilistically determined whole but to focus on the quality of the individual pieces. We’re prisoners of the microscale: “We want to know whether an encyclopedia entry is right or wrong. We want to know that there’s a wise hand (ideally human) guiding Google’s results. We want to trust what we read.”

Google in particular, Chris writes, “seems both omniscient and inscrutable. It makes connections that you or I might not, because they emerge naturally from math on a scale we can’t comprehend. Google is arguably the first company to be born with the alien intelligence of the Web’s large-N statistics hard-wired into its DNA. That’s why it’s so successful, and so seemingly unstoppable.”

Maybe it’s just the Christmas season, but all this talk of omniscience and inscrutability and the insufficiency of our mammalian brains brings to mind the classic explanation for why God’s ways remain mysterious to mere mortals: “Man’s finite mind is incapable of comprehending the infinite mind of God.” Chris presents the web’s alien intelligence as something of a secular godhead, a higher power beyond human understanding. Noting that “the weave of statistical mechanics” is “the only logic that such really large systems understand,” he concludes on a prayerful note: “Perhaps someday we will, too.” In the meantime, we must have faith.

I confess: I’m an unbeliever. My mammalian mind remains mired in the earthly muck of doubt. It’s not that I think Chris is wrong about the workings of “probabilistic systems.” I’m sure he’s right. Where I have a problem is in his implicit trust that the optimization of the system, the achievement of the mathematical perfection of the macroscale, is something to be desired. In itself, “optimization” is a neutral term. The optimization of a complex mathematical, or economic, system may make things better for us, or it may make things worse. It may improve society, or degrade it. We may not be able to apprehend the ends, but that doesn’t mean the ends are going to be good.

In a comment on Chris’s post, a fellow named Brock takes issue with the idea that Wikipedia is a probabilistic system. The value of Wikipedia, he says, lies not in the whole but in the individual entries, and the quality of those entries is determined not by statistics but by the work of individuals: “Wikipedia is wrong when a single person is wrong.” Chris counters that, even with Wikipedia, the whole matters: “The main point I was making about Wikipedia was not that any single entry is probabilistic, but that the *entire encyclopedia* is probabilistic. Your odds of getting a substantive, up-to-date and accurate entry for any given subject are excellent on Wikipedia, even if every individual entry isn’t excellent.” He then provides a hypothetical illustration:

To put it another way, the quality range in Britannica goes from, say, 5 to 9, with an average of 7. Wikipedia goes from 0 to 10, with an average of, say, 5. But given that Wikipedia has ten times as many entries as Britannica, your chances of finding a reasonable entry on the topic you’re looking for are actually higher on Wikipedia. That doesn’t mean that any given entry will be better, only that the overall value of Wikipedia is higher than Britannica when you consider it from this statistical perspective.
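Chris’s hypothetical arithmetic can be made concrete with a quick simulation. To be clear, the topic-universe size and the uniform quality distributions below are my own invented stand-ins for his round numbers; only the ratios (ten times the entries, 5-to-9 versus 0-to-10 quality) come from his illustration:

```python
import random

random.seed(42)

TOPICS = 200_000         # hypothetical universe of topics people might look up
BRIT_ENTRIES = 10_000    # Britannica covers a tenth as many topics...
WIKI_ENTRIES = 100_000   # ...as Wikipedia, per Chris's hypothetical

def coverage_odds(n_entries, quality_sampler, threshold=5, trials=100_000):
    """Chance that a randomly chosen topic has an entry scoring >= threshold."""
    hits = 0
    for _ in range(trials):
        topic = random.randrange(TOPICS)
        # The encyclopedia covers the first n_entries topics at all.
        if topic < n_entries and quality_sampler() >= threshold:
            hits += 1
    return hits / trials

brit = coverage_odds(BRIT_ENTRIES, lambda: random.uniform(5, 9))   # average 7
wiki = coverage_odds(WIKI_ENTRIES, lambda: random.uniform(0, 10))  # average 5

print(f"Britannica: {brit:.3f}")  # roughly 0.05: every covered entry passes,
                                  # but only 5% of topics are covered
print(f"Wikipedia:  {wiki:.3f}")  # roughly 0.25: half the entries pass,
                                  # but 50% of topics are covered
```

Under these made-up numbers, Wikipedia’s lower average quality is swamped by its broader coverage – which is exactly the macroscale point Chris is making.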

OK, but what are the broader consequences? Might not this statistical optimization of “value” at the macroscale be a recipe for mediocrity at the microscale – the scale, it’s worth remembering, that defines our own individual lives and the culture that surrounds us? By providing a free, easily and universally accessible information source at an average quality level of 5, will Wikipedia slowly erode the economic incentives to produce an alternative source with a quality level of 9 or 8 or 7? Will blogging do the same for the dissemination of news? Does Google-surfing, in the end, make us smarter or dumber, broader or narrower? Can we really put our trust in an alien logic’s ability to create a world to our liking? Do we want to be optimized?

Over a virtual Bethlehem rises a virtual star, and in the manger we find Kevin Kelly’s Machine, conjuring thoughts beyond our ken. Is it Our Savior or a mathematically perfected Rough Beast?

Where am I?

I was flipping through the new Business Week when I came across this sentence: “For today’s wired youth, there is no distinction between virtual and physical reality.” Gosh. That must cause problems when they have to take a leak.