Tom Lord on ritual, knowledge and the web

Earlier today on this blog, Tom Lord offered what I found to be an especially illuminating comment on my post about Beau Friedlander’s article on the differences between the book and the web as conduits of information and ideas. For those who didn’t see Tom’s comment, I reprint it here in full. The first sentence refers to an earlier comment that had cited Jacob Bronowski’s “assert[ion] that Man is the only animal with ‘social evolution’ through language and stored memories in books.”

Language does a lot more than just “store knowledge.”

Language also has a very rich syntax compared to anything other animals have. Comparatively abstract and complicated messages (notice I did not say “ideas” or “knowledge”) can be conveyed.

Homo sapiens can thus (and do) exhibit more complex kinds of social behavior.

That capacity gave rise to “oral traditions”: ways to preserve (with drift) certain linguistic expressions over time, space, and individuals. Full blown writing systems extended that. Then presses. Of late, things like the Internet.

But, notice that I’m very careful to not talk about preserving “ideas” or “knowledge” because that’s only a part of what language does and there’s not even any a priori reason to think it’s a permanent part of what language does.

Language can also convey pure ritual, for example. By ritual, I mean “language games” that a person or group of people can “act out” – translate from just the remembered song or the big tome into some social practice in the real world, people really “acting out” the ritual with no understanding – no meaning beyond “here, we do the ritual.”

Now, consider a particular piece of writing. Could be a procedures manual for running a nuke plant or it could be a teacher’s manual for teaching “Huckleberry Finn” complete with instructions for testing the student’s “literary appreciation” with some multiple-choice and short-essay questions, could be a grocery list, or could be “War and Peace.”

Are those writings the sort that convey ideas and knowledge? Or the sort that convey pure ritual?

Each is both.

It happens in the real human world, all the time, that writing slips back and forth between conveying ideas / knowledge and conveying pure ritual. A school starts teaching Huck Finn by rote and winds up teaching only the ritual of passing the cliched quizzes, for example: knowledge lost, ritual dominant. Maybe new teaching staff notices and reminds everyone of the original intent of the quizzes – of the ideas behind them – leading to a change in practices. Knowledge “recovered” from ritual at the last moment.

It isn’t hard to imagine a society in which, at least for the bulk of the people, all writing becomes pure ritual with the only knowledge commonly held being the practice of ritual itself.

Such a society would first become a kind of “cargo-cult” parody of itself, seeming at first to continue operating more or less normally. For example, the nuke plant staff may steadily lose any sense of knowledge behind their procedures and yet, if the plant was well built and the procedures well designed, initially the rituals keep the plant running whether the people understand how or not.

A “cargo-cult” phase would give way, eventually, to a degenerate phase in which “things fall apart” but the knowledge of how they were supposed to work – the knowledge needed to design repairs – is gone. Oops. The nuke plant mysteriously exploded. Now what?

What of the case of Eliot’s antisemitism quoted in Nick’s piece? What is it, exactly? Is it neatly captured and “taught” by a few sentences in Wikipedia? Or by sampling a few sentences from various on-line theses? Something you can figure out almost instantaneously using Google?

If you think so, I say that that’s a slip from knowledge to ritual. Pavlov’s dog could understand as well: someone says “Eliot,” the good dog does a quick search and says “antisemite!”

Whatever Eliot’s case was, it was a real, singular case in a real, specific historical context. Eliot’s case is, if nothing else, rich with detail. We don’t learn about Eliot’s case by hearing it ritualistically dubbed “antisemitic.” We learn about antisemitism in a particular historical period by, for example, examining Eliot’s case.

In the economics of scholarship – good scholarship – we tend not to forget that just saying “antisemitic” doesn’t in and of itself tell us much about Eliot. We tend to remember, and remember how to explore, the fact that we learn about antisemitism in part by studying the details of Eliot’s case. The Google approach to “learning anything quickly” doesn’t convey scholarship – just quick and dirty call-and-response labels.

In the idealized and perfected economics of Google, people mostly sit around consuming and producing content through the enactment of rituals as encoded in the logic of web pages, indirectly controlling the flow of money and goods. Producers observe the people and compete for their purchases by giving them fractions of the purchase price through advertising. People buy on-line, extract some use-value, and resell on-line. The system is not much interested in preserving and conveying scholarship for scholarship cannot be conveyed “almost instantly” in a few well-selected search results.

The Enlightenment gets “defined” lots of different ways and I’m not much of one for definitive definitions but here’s one way to define it:

The Enlightenment is the convergence of a set of important ideas: the idea of individual freedom; the idea of rationality and of the limits and problems of rationality; the sense that an aware-of-the-problematics employment of rationality is not only compatible with but necessary to individual freedom; the sense that the social and economic order is what reproduces the Enlightenment across time and space and what can fail to reproduce it. (Thus, for example, it leads directly to the American Revolution.)

As we more and more intrusively let the Net redefine “friendship,” “reputation,” “freedom,” “collaboration,” and “knowledge,” we are turning our attention away from the real social order and we’re turning our backs on the Enlightenment entirely. We’re giving up all of that to play a video game, with Google, complete with Real Prizes. We’re picking ritual over ideas and knowledge.

-t

Cosmic implosion

A postscript to my Who killed the blogosphere? post:

starting Monday, Cosmic Variance will be bidding adieu to its life as a plucky independent blog, and huddle into the warm embrace of Discover Magazine … Now, we know what you’re thinking: you knew us back when we were indie rock, keeping it real, and now we’re going all corporate? Yes, yes we are. If for no other reason than the thankless task of keeping the blog from crashing and handling the technical end of things will be put in someone else’s capable hands, not our clueless ones. But there are other reasons. Hopefully the association with Discover will open up new opportunities, and bring new readers to our discussions. And we’re happy to be joining an elite community of blogs that are already up and running at Discover.

“Elite community”: now there’s a telling phrase.

Between a book and a web search

In a well-turned essay to be published in tomorrow’s Los Angeles Times, available immediately thanks to the miracle of digital type, Beau Friedlander, the editor-in-chief of Air America, looks into the “chasm between virtual texts and their printed counterparts.” He quotes Diane Ackerman on the blessings of the World Wide Web, which can make research a breeze:

While planning her most recent book, “The Zookeeper’s Wife,” author Diane Ackerman used the Internet “to know what animals the Warsaw Zoo kept, what animals called when, what they sounded like, smelled like, looked like and so on. ‘Gibbon calls,’ I thought. I Googled them, and heard their duets! I needed to know what birds would have been there, so I used the Internet to discover the aerial flyways over Europe in 1939. Previously, I would have made a trip to Cornell’s Lab of Ornithology, and spent hours there.”

But Ackerman also “did a lot of old-school research,” reports Friedlander. “‘I read a sea of books, interviews and testimonies – by and about people who witnessed the Holocaust – and I studied World War II history, armaments, cuisine, leaders, airplanes, medicine, architecture, fashion, music, films and such,’ she says. ‘Some of that I could find on the Internet, but not much; most of it meant reading books, some of which I had to have translated.'”

For all its convenience, Google’s snippet-view of information flattens knowledge, erasing context. Sometimes truth lies not in the needle but in the haystack. Writes Friedlander:

Books require a different sort of communion with one’s subject than the Internet. They foster a different sort of memory – more tactile, more participatory. I know more or less where, folio-wise, Eliot gets nasty about the Jews in his infamous 1933 lecture series “After Strange Gods,” but I always have to read around a bit to find the exact quote, and the time spent softens the bite of his anti-Semitism because the hateful remarks were made amid smart ones. For literary works, books are still, and most likely always will be, indispensable.

But technology has a curious way of making the indispensable dispensable. Markos Moulitsas Zuñiga, of the Daily Kos, tells Friedlander, “Google makes it possible to learn anything, near instantaneously. Like natural selection, there are species that adapt to the changing environment around them and thrive, and others die off.”

Except that nature has nothing to do with it. It’s what we call “progress,” a word that salves all wounds.

Zuckerberg’s Second Law

There’s something about the crisp autumn air that brings out the philosopher in Mark Zuckerberg. At this week’s Web 2.0 Summit, the Facebook founder mused, according to Saul Hansell of the New York Times, “I would expect that next year, people will share twice as much information as they share this year, and [the] next year, they will be sharing twice as much as they did the year before.”

Hansell dubs this Zuckerberg’s Law. But I believe it’s actually Zuckerberg’s Second Law. Zuckerberg’s First Law, enunciated on another fall day almost precisely one year ago, took this elemental form: “Once every hundred years media changes.”

Zuckerberg’s Second Law is certainly superior to Zuckerberg’s First Law, if only because it is not quite so obviously false. If you’re going to make up big laws, it’s always best to make them up about the future rather than the past.

And the Second Law has, as Hansell notes, a nice Gordon Moore kind of ring to it: “The amount of information we disclose about ourselves will, like the number of transistors on a slice of silicon, double every year.” I’ll buy that.
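To see how quickly that compounds, here is a trivial back-of-the-envelope sketch; the baseline of 100 shared items per person per year is an invented number, purely for illustration.

```python
# Zuckerberg's Second Law, compounded: doubling every year means a 32x
# increase after just five years. The baseline figure is invented.
baseline = 100  # hypothetical items shared per person in year 0
for year in range(6):
    print(f"year {year}: {baseline * 2 ** year} items shared")
# year 0: 100, year 1: 200, year 2: 400, year 3: 800, year 4: 1600, year 5: 3200
```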

I’m troubled, though, by the implications of this exponential growth in our release of intimate data. I mean, aren’t we all pretty much tapped out already? Think forward a few years, and imagine the kind of details we’re all going to have to disgorge just to satisfy the demands of Zuckerberg’s Second Law. Shall no fart pass without a tweet?

Who killed the blogosphere?

Blogging seems to have entered its midlife crisis, with much existential gnashing-of-teeth about the state and fate of a literary form that once seemed new and fresh and now seems familiar and tired. And there’s good reason for the teeth-gnashing. While there continue to be many blogs, including a lot of very good ones, it seems to me that one would be hard pressed to make the case that there’s still a “blogosphere.” That vast, free-wheeling, and surprisingly intimate forum where individual writers shared their observations, thoughts, and arguments outside the bounds of the traditional media is gone. Almost all of the popular blogs today are commercial ventures with teams of writers, aggressive ad-sales operations, bloated sites, and strategies of self-linking. Some are good, some are boring, but to argue that they’re part of a “blogosphere” that is distinguishable from the “mainstream media” seems more and more like an act of nostalgia, if not self-delusion.

And that’s why there’s so much angst today among the blogging set. As The Economist observes in its new issue, “Blogging has entered the mainstream, which – as with every new medium in history – looks to its pioneers suspiciously like death.”

“Blogging” has always had two very different definitions, of course. One is technical: a simple system for managing and publishing content online, as offered through services such as WordPress, Movable Type, and Blogger. The other involves a distinctive style of writing: a personal diary, or “log,” of observations and links, unspooling in a near-real-time chronology. When we used to talk about blogging, the stress was on the style. Today, what blogs have in common is mainly just the underlying technology – the “publishing platform” – and that makes it difficult to talk meaningfully about a “blogosphere.”

Stylewise, little distinguishes today’s popular blogs from ordinary news sites. One good indicator is page bloat. The Register’s John Oates points today to a revealing study of the growing obesity of once slender blog pages. “Blog front pages are now large pages of images and scripts rather than the pared-down text pages of old,” he writes. The study, by Pingdom, is remarkable. Among the top 100 blogs, as listed by the blog search engine Technorati, the average “front page” (note, by the way, how the mainstream-media term is pushing aside the more personal “home page”) is nearly a megabyte, and three-quarters of the blogs have front pages larger than a half megabyte. The main culprits behind the bloat are image files, which have proliferated as blogs have adopted the look of traditional news sites. The top 100 blogs have, on average, a whopping 63 images on their front pages.

As blogs have become mainstream, they’ve lost much of their original personality. “Scroll down Technorati’s list of the top 100 blogs and you’ll find personal sites have been shoved aside by professional ones,” writes one corporate blogger, Valleywag’s Paul Boutin, in the new Wired. “Most are essentially online magazines: The Huffington Post. Engadget. TreeHugger. A stand-alone commentator can’t keep up with a team of pro writers cranking out up to 30 posts a day. When blogging was young, enthusiasts rode high, with posts quickly skyrocketing to the top of Google’s search results for any given topic, fueled by generous links from fellow bloggers … That phenomenon was part of what made blogging so exciting. No more.” The buzz has left blogging, says Boutin, and moved, at least for the time being, to Facebook and Twitter.

I was a latecomer to blogging, launching Rough Type in the spring of 2005. But even then, the feel of blogging was completely different than it is today. The top blogs were still largely written by individuals. They were quirky and informal. Such blogs still exist (and long may they thrive!), but as Boutin suggests, they’ve been pushed to the periphery.

It’s no surprise, then, that the vast majority of blogs have been abandoned. Technorati has identified 133 million blogs since it started indexing them in 2002. But at least 94 percent of them have gone dormant, the company reports in its most recent “state of the blogosphere” study. Only 7.4 million blogs had any postings in the last 120 days, and only 1.5 million had any postings in the last seven days. Now, as longtime blogger Tim Bray notes, 7.4 million and 1.5 million are still sizable numbers, but they’re a whole lot lower than we’ve been led to believe. “I find those numbers shockingly low,” writes Bray; “clearly, blogging isn’t as widespread as we thought.” Call it the Long Curtail: For the lion’s share of bloggers, the rewards just aren’t worth the effort.

Back in 2005, I argued that the closest historical precedent for blogging was amateur radio. The example has become, if anything, more salient since then. When “the wireless” was introduced to America around 1900, it set off a surge in amateur broadcasting, as hundreds of thousands of people took to the airwaves. “On every night after dinner,” wrote Francis Collins in the 1912 book Wireless Man, “the entire country becomes a vast whispering gallery.” As amateur broadcasting boomed, utopian rhetoric soared. Popular Science wrote, “The nerves of the whole world are, so to speak, being bound together, so that a touch in one country is transmitted instantly to a far-distant one.” The amateur broadcasters, the historian Susan J. Douglas has written, “claimed to be surrogates for ‘the people.'” The democratic “radiosphere,” as we might have called it today, “held a special place in the American imagination precisely because it married idealism and adventure with science.”

But it didn’t last. Radio soon came to be dominated by a relatively small number of media companies, with the most popular amateur operators being hired on as radio personalities. Social production was absorbed into corporate production. By the 1920s, radio had become “firmly embedded in a corporate grid,” writes Douglas. A lot of amateurs continued to pursue their hobby, quite happily, but they found themselves pushed to the periphery. “In the 1920s there was little mention of world peace or of anyone’s ability to track down a long-lost friend or relative halfway around the world. In fact, there were not many thousands of message senders, only a few … Thus, through radio, Americans would not transcend the present or circumvent corporate networks. In fact they would be more closely tied to both.”

That’s not to say that the amateur radio operators didn’t change the mainstream media. They did. And so, too, have bloggers. Allowing readers to post comments on stories has now, thanks to blogging, become commonplace throughout online publishing. But the once popular idea that blogs would prove to be an alternative to, or even a devastating attack on, corporate media has proven naive.

Who killed the blogosphere? No one did. Its death was natural, and foretold.

UPDATE: Justin Flood points to a difference between amateur radio and blogging: “It’s a fairly good statement to say that blogging in general will likely be more and more absorbed into the mainstream media, leaving independent bloggers a bit fewer and farther between. But unlike amateur radio, which has all but died today due to licensing and equipment costs, independent blogging will always be around. All one needs is a modicum of technical and writing knowledge and a website like Blogger or WordPress.com to host a blog for free.” I think there’s a lot of truth to that – it’s considerably easier, assuming you have a computer and net connection, to become a blogger than to become a ham radio operator, and that should, in theory, mean that a fairly steady stream of new bloggers should continue to enter the field (even if they don’t stay in it very long). Still, though, Flood exaggerates the death of amateur radio. There are about 3 million amateur radio operators worldwide. That doesn’t seem to be radically different from the number of active bloggers, despite the fact that blogging is new and sexy while hamming is, well, old and dusty.

UPDATE: A postscript.

The new economics of computing

Are we missing the point about cloud computing?

That question has been rattling around in my mind for the last few days, as the chatter about the role of the cloud in business IT has intensified. The discussion to date has largely had a retrospective cast, focusing on the costs and benefits of shifting existing IT functions and operations from in-house data centers into the cloud. How can the cloud absorb what we’re already doing? is the question that’s being asked, and answering it means grappling with such fraught issues as security, reliability, interoperability, and so forth. To be sure, this is an important discussion, but I fear it obscures a bigger and ultimately more interesting question: What does the cloud allow us to do that we couldn’t do before?

The history of computing has been a history of falling prices (and consequently expanding uses). But the arrival of cloud computing – which transforms computer processing, data storage, and software applications into utilities served up by central plants – marks a fundamental change in the economics of computing. It pushes down the price and expands the availability of computing in a way that effectively removes, or at least radically diminishes, capacity constraints on users. A PC suddenly becomes a terminal through which you can access and manipulate a mammoth computer that literally expands to meet your needs. What used to be hard or even impossible suddenly becomes easy.

My favorite example, which is about a year old now, is both simple and revealing. In late 2007, the New York Times faced a challenge. It wanted to make available over the web its entire archive of articles, 11 million in all, dating back to 1851. It had already scanned all the articles, producing a huge, four-terabyte pile of images in TIFF format. But because TIFFs are poorly suited to online distribution, and because a single article often comprised many TIFFs, the Times needed to translate that four-terabyte pile of TIFFs into more web-friendly PDF files. That’s not a particularly complicated computing chore, but it’s a large computing chore, requiring a whole lot of computer processing time.

Fortunately, a software programmer at the Times, Derek Gottfrid, had been playing around with Amazon Web Services for a number of months, and he realized that Amazon’s new computing utility, Elastic Compute Cloud (EC2), might offer a solution. Working alone, he uploaded the four terabytes of TIFF data into Amazon’s Simple Storage Service (S3) utility, and he hacked together some code for EC2 that would, as he later described in a blog post, “pull all the parts that make up an article out of S3, generate a PDF from them and store the PDF back in S3.” He then rented 100 virtual computers through EC2 and ran the data through them. In less than 24 hours, he had his 11 million PDFs, all stored neatly in S3 and ready to be served up to visitors to the Times site.
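To make the pattern concrete, here is a minimal sketch of that pull-convert-store loop for a single article. It is not Gottfrid’s actual code; the tools (boto3 and Pillow), the bucket name, and the key layout are all assumptions made for illustration. It simply shows the shape of the job that each of the 100 rented machines would have worked through in parallel, article by article.

```python
# Hypothetical sketch of the pull-convert-store loop described above -- not the
# Times' actual code. The libraries (boto3, Pillow), bucket name, and key
# layout are illustrative assumptions.
import io

import boto3
from PIL import Image

s3 = boto3.client("s3")

def convert_article(bucket, tiff_prefix, pdf_key):
    """Pull every scanned TIFF page of one article out of S3, stitch the pages
    into a single PDF, and store that PDF back in S3."""
    pages = []
    paginator = s3.get_paginator("list_objects_v2")
    for listing in paginator.paginate(Bucket=bucket, Prefix=tiff_prefix):
        for obj in listing.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
            pages.append(Image.open(io.BytesIO(body)).convert("RGB"))

    # Pillow can write a multi-page PDF directly from the page images.
    buf = io.BytesIO()
    pages[0].save(buf, format="PDF", save_all=True, append_images=pages[1:])
    s3.put_object(Bucket=bucket, Key=pdf_key, Body=buf.getvalue())

if __name__ == "__main__":
    # Each rented EC2 instance would loop over its own slice of the 11 million
    # articles, calling something like this for each one.
    convert_article("times-archive", "tiff/article-0000001/", "pdf/article-0000001.pdf")
```

The interesting thing is not the particular libraries but the economics: the same function runs unchanged on one rented machine or a hundred, and scaling to a hundred is a matter of an API call and a small bill rather than a capital request.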

The total cost for the computing job? Gottfrid told me that the entire EC2 bill came to $240. (That’s 10 cents per computer-hour times 100 computers times 24 hours; there were no bandwidth charges since all the data transfers took place within Amazon’s system – from S3 to EC2 and back.)

If it weren’t for the cloud, Gottfrid told me, the Times might well have abandoned the effort. Doing the conversion would have taken either a whole lot of time or a whole lot of money, and it would have been a big pain in the ass. With the cloud, though, it was fast, easy, and cheap, and it only required a single employee to pull it off. “The self-service nature of EC2 is incredibly powerful,” says Gottfrid. “It is often taken for granted but it is a real democratizing force in lowering the barriers.” Because the cloud makes hard things easy, using it, Gottfrid told Business Week’s Stephen Baker, “is highly addictive.” The Times has gone on to use S3 and EC2 for other chores, and, says Gottfrid, “I have ideas for countless more.”

The moral of this story, for IT types, is that they need to look at the cloud not just as an alternative means of doing what they’re already doing but as a whole new form of computing that provides, today, a means of doing things that couldn’t be done before or that at least weren’t practicable before. What happens when the capacity constraints on computing are lifted? What happens when employees can bypass corporate systems to perform large-scale computing tasks in the cloud for pennies? What happens when computer systems are built on the assumption that they will be broadly shared rather than used in isolation?

I think we will find that a whole lot happens, and it will go well beyond IT-as-usual. When electricity became a utility – cheap and ubiquitous – it didn’t just reduce the cost of running existing factory machines. As I describe in my book The Big Switch, it allowed a creative fellow like Henry Ford to build an electrified assembly line and change manufacturing forever. It’s natural to see a new technology through the lens of the technology it supplants, but that’s a blinkered view, and it can blind you to the future.

Openness is not enough

In the crowd at Microsoft’s cloud-computing coming out party earlier this week sat at least one Googler, and, as the Guardian’s Jack Schofield notes today, his observations about the event and its implications are worth reading. The guy in question, Dion Almaer, who works on Google Gears, among other things, writes on his personal blog: “I have had the pleasure to be at PDC this week and Microsoft put on a great show. As they showed their vision of unification around Windows (cloud, Web, PC, mobile) through great developer tools, there was excitement. Windows Azure looks great.” (The sound you just heard was Sergey Brin spitting his masala chai all over his MacBook Air.)

While emphasizing that he “remains curious about the details,” Almaer continues: “The ‘on premise’ feature [of Azure] looks particularly intriguing. If they can bridge the data center and the cloud, they have something quite compelling. Enterprises are struggling with the cloud in part. What do you put up there? How do you secure it? How do you tie back? Microsoft is going after that problem.”

He then goes on to discuss some of the competitive implications:

… even though we knew about [most of what Microsoft announced], I don’t know if we thought they were this far along. Microsoft is executing. This show set the stage “this is where we are going, and look how far we have come.” The Office on the Web demo showed that. Works in all browsers, with enhanced Silverlight support. Very nice indeed. What a wake up call to the rest of the Web? …

For those of us who worry about handing Microsoft control of the browser, plugins to other browsers, the cloud, the server model, and more…. I won’t lie to you. I am cautiously observing. Silverlight adoption worries me. We can’t fight Microsoft with “don’t choose them, remember what they did to you before?” Fear is lame. Instead, this is a wake up call to Adobe, Google, Yahoo!, Amazon, IBM, Sun, [insert other developer / platform players] to get kicking.

We can’t just be Open, we have to be better!

Precisely so. We can (and will) have debates about the relative openness of Azure and AWS and Force.com and all the other “cloud platforms” that are available or will be available. And those will be important debates. But in this early stage of the cloud’s development, openness means little to the buyer (or user). The buyers, particularly those in big companies, are nervous about the cloud even as they are becoming increasingly eager to reap the benefits the cloud can provide. What they care about right now is security, reliability, features, compatibility with their existing systems and applications, ease of adoption, stability of the vendor, and other practical concerns. In the long run, they may come to regret their lack of stress on openness, but in the here-and-now it’s just not a major consideration. They want stuff that works and won’t blow up in their faces.

In other words, and to echo Almaer, cloud customers are going to embrace what’s better, as they define “better” right now, not necessarily what’s more “open.” And this is one of the big questions that remains to be answered about Google and its ability to sell to big companies: Is it going to be able to see the world through the eyes of its potential customers, even if that view does not coincide with its own philosophy?