A new chapter in the theory of messages

One of the goals of the software coder is parsimony. Because every line, even every character, of code places a demand on the computer processor, the pruning of instructions to their essence makes for faster, more efficient programs and an optimized system. The art of the coder, like that of the aphorist, is one of compression.

Twitter, it has become clear, was “never about what you’re doing for breakfast,” as Steve Gillmor writes. It was about creating “the realtime universal message bus.” It was, in other words, about building an electronic conduit, a “bus,” through which the people on the network – the human nodes – could efficiently exchange what have come to be called “status updates.” The use of engineering terms to describe social relations is both apt and necessary. The social network is a computer network, a platform for programming in which man and machine enter a symbiotic, or cybernetic, relationship.

In Twitter messages, or tweets, the use of the “@” sign is a means of denoting a specific address on the computer network at which a human operative is stored. The human operative receives the realtime message, the instruction, and is activated, usually resulting in the issuance of another message. The 140-character limit on messages is a means of imposing parsimony on a lay audience that, without the limit, might revert to its natural human loquaciousness and gum up the system. The realtime human-machine network is able, as a result, to operate with a high degree of efficiency, leading to an optimal deployment of cybernetic resources.
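(An aside for the technically inclined: the mechanism is simple enough to sketch. What follows is a toy model of such a bus – my own illustration, not Twitter’s actual implementation – that enforces the 140-character constraint and routes each message to the “@”-addressed operatives registered on the network.)

    import re

    MAX_LEN = 140  # the parsimony constraint

    class MessageBus:
        """A toy realtime bus: @-addressed human nodes, 140-character messages."""

        def __init__(self):
            self.nodes = {}  # handle -> callback that "activates" the operative

        def register(self, handle, callback):
            self.nodes[handle] = callback

        def publish(self, sender, text):
            if len(text) > MAX_LEN:
                raise ValueError("loquaciousness detected: over 140 characters")
            # Deliver to every @-addressed node; activation usually
            # results in the issuance of another message.
            for handle in re.findall(r"@(\w+)", text):
                if handle in self.nodes:
                    self.nodes[handle](sender, text)

    bus = MessageBus()
    bus.register("gillmor", lambda s, t: print(f"[@gillmor] activated by @{s}: {t}"))
    bus.publish("carr", "@gillmor the universal bus is running")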

In his 1950 book The Human Use of Human Beings, cybernetics pioneer Norbert Wiener provided the context for the social networking systems that are becoming so popular today:

… society can only be understood through a study of the messages and the communication facilities which belong to it; and … in the future development of these messages and communication facilities, messages between man and machines, between machines and man, and between machine and machine, are destined to play an ever-increasing part.

When I give an order to a machine, the situation is not essentially different from that which arises when I give an order to a person. In other words, as far as my consciousness goes I am aware of the order that has gone out and of the signal of compliance that has come back. To me, personally, the fact that the signal in its intermediate stages has gone through a machine rather than through a person is irrelevant and does not in any case change my relation to the signal. Thus the theory of control in engineering, whether human or animal or mechanical, is a chapter in the theory of messages …

The needs and the complexity of modern life make greater demands on this process of information [exchange] than ever before … To live effectively is to live with adequate information. Thus, communication and control belong to the essence of man’s inner life, even as they belong to his life in society.

Wiener was writing a half century ago. Today, the complexity is much magnified and the need for efficient messaging all the greater. Hence man’s rapid embrace of the realtime messaging bus, not only via Twitter but via other increasingly realtime social networks such as FriendFeed (which today announced that realtime messaging “will underlie everything about FriendFeed from now on”) and Facebook (which also recently rolled out a new “realtime” design for its site).

The human benefits are real. The enforced introduction of parsimony into social messaging relieves the pressure of worldly complexity and can provide the sense of well-being that often comes from radical simplification. Vanessa Grigoriadis gives eloquent voice to the benefits of our new cybernetic social system in her cover story on Facebook in the new issue of New York magazine:

On Facebook, I didn’t have to talk to anyone, really, but I didn’t feel alone, and I mean “alone” in the existential use of the word; everyone on Facebook wished me well, which I know not to be the case in the real world; and, most important, there was nothing messy or untoward or unpleasant—the technology controlled human interaction, keeping everyone at a perfect distance, not too close and not too far away, in a zone where I rarely felt weird or lame or like I had said the wrong thing, the way one often feels in the real world. This is the promise of Facebook, the utopian hope for it: the triumph of fellowship; the rise of a unified consciousness; peace through superconnectivity, as rapid bits of information elevate us to the Buddha mind, or at least distract us from whatever problems are at hand. In a time of deep economic, political, and intergenerational despair, social cohesion is the only chance to save the day, and online social networks like Facebook are the best method available for reflecting—or perhaps inspiring—an aesthetic of unity.

It might at this point be suggested that our new transcendentalism is one in which individual human operatives, acting in physical isolation as nodes on a network, achieve the unity of an efficient cybernetic system through the optimized exchange of parsimonious messages over a universal realtime bus.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.

U. of Phoenix nixes Twitter U.

Wendy Paul, executive director of public relations for the University of Phoenix, offers an official response to my April Fools post: “University of Phoenix is not going to deliver courses via Twitter. With the limited characters you can post on Twitter, this wouldn’t be a feasible platform for a robust and quality academic curriculum.”

Typical ivory-tower elitist.

Google lifts its skirts

Yesterday was a remarkable day for the small, slightly obsessed band of Google data-center watchers, of whom I am one. Around each of the company’s sprawling server farms is a high metal fence patrolled by a particularly devoted squad of rent-a-cops, who may or may not be cyborgian in nature. Ordinary humans seeking a peek at the farms have been required to stand at the fence and gaze at the serene exteriors of the buildings, perhaps admiring the way the eponymous clouds of steam rise off the cooling towers in the morning:

steam.jpg

[photo by Toshihiko Katsuda]

Everything inside the buildings was left to the imagination.

No more. Yesterday, without warning, Google lifted its skirts and showed off its crown jewels. (I think you may need to be Scottish to appreciate that rather grotesquely mixed metaphor.) At the company’s Data Center Energy Summit, it showed a video of the computer-packed shipping containers that it confirmed are the building blocks of its centers (proving that Robert X. Cringely was on the money after all), provided all sorts of details about the centers’ operations, and, most shocking of all, showed off one of its legendary homemade servers.

When Rich Miller, of Data Center Knowledge fame, posted a spookily quiet video of the server yesterday – the video looks like a Blair Witch Project outtake – I initially thought it was an April Fools joke.

But then I saw some sketchy notes about the conference that Amazon data-center whiz James Hamilton had posted on his blog, and it started to become clear that it was no joke:

Containers Based Data Center

· Speaker: Jimmy Clidaras

· 45 containers (222 kW each / max is 250 kW – 780 W/sq ft)

· Showed pictures of containerized data centers

· 300′ × 250′ container hangar

· 10 MW facility

· Water-side economizer

· Chiller bypass …
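Sketchy as they are, Hamilton’s numbers hang together. A quick back-of-the-envelope check, using only the figures in the notes above:

    containers = 45
    kw_per_container = 222            # typical load per container, per the notes
    facility_mw = containers * kw_per_container / 1000
    print(facility_mw)                # 9.99 -- squares with the "10 MW facility"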

The server pictured in Miller’s video was the real deal – down to the ingeniously bolted-on battery that allows short-term power backup to be distributed among individual servers rather than centralized in big UPS stacks, as is the norm in data-center design.

Now, CNET’s Stephen Shankland provides a further run-down of the Google disclosures, complete with a diagram of the container-based centers and close-up shots of those idiosyncratic servers, the design of which, said Googler Ben Jai, was “our Manhattan Project.”

GoogleServer.jpg

[photo by Stephen Shankland]

I was particularly surprised to learn that Google rented all its data-center space until 2005, when it built its first center. That implies that The Dalles, Oregon, plant (shown in the photo above) was the company’s first official data smelter. Each of Google’s containers holds 1,160 servers, and the facility’s original server building had 45 containers, which means that it probably was running a total of around 52,000 servers. Since The Dalles plant has three server buildings, that means – and here I’m drawing a speculative conclusion – that it might be running around 150,000 servers altogether.
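For those who want to check my math, the sketch is simple. All the figures come from the disclosures; the speculative leap is only in assuming the other two buildings are filled out like the first:

    servers_per_container = 1160
    containers_per_building = 45
    per_building = servers_per_container * containers_per_building
    print(per_building)                  # 52,200 -- hence "around 52,000 servers"
    buildings = 3                        # server buildings at The Dalles
    print(per_building * buildings)      # 156,600 -- hence "around 150,000 altogether"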

Here are some more details, from Rich Miller’s report:

The Google facility features a “container hangar” filled with 45 containers, with some housed on a second-story balcony. Each shipping container can hold up to 1,160 servers, and uses 250 kilowatts of power, giving the container a power density of more than 780 watts per square foot. Google’s design allows the containers to operate at a temperature of 81 degrees in the hot aisle. Those specs are seen in some advanced designs today, but were rare indeed in 2005 when the facility was built.

Google’s design focused on “power above, water below,” according to [Jimmy] Clidaras, and the racks are actually suspended from the ceiling of the container. The below-floor cooling is pumped into the cold aisle through a raised floor, passes through the racks and is returned via a plenum behind the racks. The cooling fans are variable speed and tightly managed, allowing the fans to run at the lowest speed required to cool the rack at that moment …

[Urs] Holzle said today that Google opted for containers from the start, beginning its prototype work in 2003. At the time, Google housed all of its servers in third-party data centers. “Once we saw that the commercial data center market was going to dry up, it was a natural step to ask whether we should build one,” said Holzle.
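Miller’s “more than 780 watts per square foot” also falls straight out of the container dimensions – assuming, and this is my assumption rather than a disclosed fact, that Google used standard 40-foot shipping containers:

    container_sq_ft = 8 * 40             # 40-ft container footprint (assumed): 320 sq ft
    max_watts = 250 * 1000               # 250 kW per container, per Miller's report
    print(max_watts / container_sq_ft)   # 781.25 -- "more than 780 watts per square foot"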

I have to confess that I suddenly feel kind of empty. One never fully appreciates the pleasure of a good mystery until it’s uncloaked.

UPDATE: In an illuminating follow-up post, James Hamilton notes that both the data-center design and the server that Google showed off at the meeting are likely several generations behind what Google is doing today. So it looks like the mystery remains at least partially cloaked.

Twitter U.

Realtime is going to college. The University of Phoenix, having pioneered web-based learning and built one of the largest “virtual campuses” in Second Life, is now looking to become the dominant higher-education institution on Twitter. The biggest for-profit university in the world, UoP will roll out this fall a curriculum of courses delivered almost entirely through the microblogging service, according to an article in the new issue of Rolling Stone (not yet posted online). The first set of courses will be in the school’s Business and Management, Technology, and Human Services programs and will allow students to earn “certificates.” But the school plans to rapidly expand the slate of Twitter courses, according to dean of faculty Robert Stanton, and will within three years “offer full degree programs across all our disciplines.” Stanton tells Rolling Stone that Twitter, as a “near-universal, bidirectional communication system,” offers a “powerful pedagogical platform ideally suited to the mobile, fast-paced lives of many of our students.”

Most of the instruction in the Twitter courses will be done through the 140-character “tweets” for which the service is famous, though instructors are also expected to occasionally refer to longer online documents by including “short URL” links in the tweets. “The goal,” says Stanton, “is to keep instruction within the Twitter system to the extent possible. We see the 140-character text limit as more an opportunity than a challenge. It further condenses and democratizes higher education, delivering knowledge and other relevant content to the student in a low-cost and efficient manner.” All examinations will be conducted through exchanges of tweets, according to Stanton.

That sounds bizarre to me, but I admit to being behind the times when it comes to virtual learning. Why not snippetize education? After all, you have to connect with students using the platforms they understand, and things like weighty textbooks and musty classrooms seem increasingly twentieth century.

How many tweets does an earthquake make?

If a tree falls in the woods and no one is around to send a tweet about it, did it really fall?

This is the question I’ve been trying to wrap my head around today, after reading Steve Gillmor’s latest missive from the realtime future (where they speak a somewhat different version of English than we do at present). Gillmor reports on a seismic event that happened near his home earlier today:

This morning I felt a jolt and reached for my iPhone to check in with my wife on the highway. She immediately asked whether it was on Twitter …

Now at first, I have to confess, this struck me as kind of odd. Your spouse calls you to tell you about an earthquake at your house, a potentially catastrophic natural event, and the first thing you say is, “Was it on Twitter?” But then I realized I wasn’t thinking of it from a fully realtime perspective. (I still find myself drifting back to real time now and then.) As soon as I recalibrated my mindset, everything came into focus: In realtime, nothing ever happens firsthand. Reality becomes real only after it has been mediated, encapsulated into an electronic message and shot through a network into a virtual community. The unstreamed life is no life at all.

One thing remained disconcerting, though: Gillmor actually called his wife before checking Twitter.* He appears to have given credence to a mere “jolt,” an unmediated and purely sensory perception. In fact, he says, it took him a full “10 seconds” after his wife’s question before he successfully checked Twitter, at which time he found “three screens of earthquake tweets.” Finally, after unconscionable delay, the earthquake – a three-screener, no less! – had at last been granted entrance to the realm of the real. The tree had fallen.

Oh, Mr. Gillmor, I had looked up to you as my realtime guru, my Maharishi of the Perpetual Status-Update. Now it turns out that – dare I say it? – you have feet of flesh.

_______

*The author suggests that readers not fully familiar with Twitter consult Dan Kennedy’s fairly comprehensive introduction to the popular microblogging service.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.

Potemkinpedia

Today’s Sunday Times features an interesting essay on Wikipedia by Noam Cohen, Rough Type’s Journalist of the Week (my last post was inspired by his article on ghosttwittering). Cohen draws an elaborate parallel between Wikipedia and a city:

With its millions of visitors and hundreds of thousands of volunteers, its ever-expanding total of articles and languages spoken, Wikipedia may be the closest thing to a metropolis yet seen online … The search for information resembles a walk through an overbuilt quarter of an ancient capital. You circle around topics on a path that appears to be shifting … Wikipedia encourages contributors to mimic the basic civility, trust, cultural acceptance and self-organizing qualities familiar to any city dweller. Why don’t people attack each other on the way home? Why do they stay in line at the bank? … The police may be an obvious answer. But this misses the compact among city dwellers. Since their creation, cities have had to be accepting of strangers — no judgments — and residents learn to be subtly accommodating, outward looking.

It’s a nice conceit, and not unilluminating.

But Cohen gets carried away by his metaphor. There’s more than a hint of the Potemkin Village in his idealized portrait of Wikipedia:

It is [the site’s] sidewalk-like transparency and collective responsibility that makes Wikipedia as accurate as it is. The greater the foot traffic, the safer the neighborhood. Thus, oddly enough, the more popular, even controversial, an article is, the more likely it is to be accurate and free of vandalism.

Except, well, that’s not entirely true. One of the main reasons that the most popular and most controversial Wikipedia articles have come to be more “accurate and free of vandalism” than they used to be has nothing to do with “sidewalk-like transparency and collective responsibility.” It’s the fact that Wikipedia has imposed editorial controls on those articles, restricting who can edit them. Wikipedia has, to play with Cohen’s metaphor, erected a lot of police barricades, cordoning off large areas of the site and requiring would-be editors to show their government-issued ID cards before passing through.

If, as a stranger, you visit a relatively unpopular and noncontroversial Wikipedia article – like, say, “Toothpick” – you’ll find a welcoming tab at the top that encourages you to “edit this page”:

toothpick.jpg

But if you go to a popular and controversial article, you’ll almost certainly find that the “edit this page” tab is nowhere to be seen (in its place is an arcane “view source” tab). The welcome mat has been removed and replaced by a barricade. Here, for instance, is the page for “George W. Bush”:

bush.jpg

And here’s the page for “Barack Obama”:

obama.jpg

And here’s the page for “Islam”:

islam.jpg

And here’s the page for “Jimmy Wales”:

wales.jpg

And here’s the page for “Britney Spears”:

spears.jpg

And here’s the page for “Sex”:

sex.jpg

You get the picture.

All these pages are what Wikipedia calls “protected,” which means that only certain users are allowed to edit them. The editing of “semi-protected” pages is restricted to “autoconfirmed users” – that is, users who have formally registered on the site and who “pass certain thresholds for age and editcount” – and the editing of “fully protected” pages is limited to official Wikipedia administrators. (Another set of “page titles” is under “creation protection” to prevent the pages from being created in the first place.) Many of Wikipedia’s most-visited pages are currently under some form of protection, usually semi-protection.
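(For anyone who wants to audit the barricades directly, an article’s protection settings can be read programmatically. Here is a minimal sketch using Wikipedia’s public MediaWiki API; the exact protection levels you see will shift over time as settings change:)

    import requests

    def protection(title):
        """Return the protection entries for a Wikipedia article."""
        r = requests.get(
            "https://en.wikipedia.org/w/api.php",
            params={
                "action": "query",
                "prop": "info",
                "inprop": "protection",
                "titles": title,
                "format": "json",
            },
        )
        page = next(iter(r.json()["query"]["pages"].values()))
        # Each entry looks like {"type": "edit", "level": "autoconfirmed", ...}
        return page.get("protection", [])

    print(protection("Toothpick"))        # [] -- anyone may edit
    print(protection("George W. Bush"))   # edit restricted, e.g. to autoconfirmed users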

The reason for instituting such controls is, according to Wikipedia, “to prevent vandalism to popular pages.” Accuracy, in other words, requires top-down controls as well as bottom-up collective action. So when Cohen declares that “sidewalk-like transparency and collective responsibility” are what “makes Wikipedia as accurate as it is,” he’s not telling us the whole story. He’s giving us the official Chamber of Commerce view.

Now, for the great majority of people who consult Wikipedia, such distinctions don’t matter. They’re not interested in how the sausage is made. As long as the information is accurate enough and informative enough for their purposes, they’re content. But, as Cohen notes toward the end of his article, arguments about how Wikipedia works involve “a true clash of ideas.” In Wikipedia and other online communities we see both the possibilities and the limitations of “collective responsibility.” And so, when someone raises a Potemkin facade, it’s important to peek behind it.

The shame of it is that Cohen’s metaphor, and article, would have been even richer had he given us the full story of how order is maintained on the crowded streets of Wiki City.

The energy

The great thing about the two-dimensionality of the realtime-realspace continuum is that the sense of intimacy gets disconnected from the act of intimacy. You get the pleasure of the intimate exchange without having to clean up afterwards. No risk, no mess.

In today’s New York Times, Noam Cohen delivers the profoundly unstartling revelation that a lot of celebrities have hired flacks to feed content into their Twitter streams, their blogs, and the various other online channels of faux authenticity. A gentleman named Broadway (not his real name) thumbs tweets for rapper 50 Cent (not his real name), who has nearly a quarter million pseudonymous followers, making him an avatar among avatars. “He doesn’t actually use Twitter,” Broadway says of his famously bullet-puckered boss, “but the energy of it is all him.”

Ah, to be distilled to an essence, to merge into the electron/photon stream. Add this to Baudrillard’s list:

Ecstasy of identity: the energy. More personal than the personal.

Even Owen Thomas, lonely maintainer of the much-reduced Valleywag brand, finds himself waxing philosophical, serving up Baudrillardian McNuggets:

That’s the grand irony of Twitter: Even the real people on the service are fake. They are their own simulacra. No one actually lives their life 140 characters at a time. What we do is turn ourselves into works of fiction. Who’s real? Who’s not? Who cares?

Simulacrum = avatar = the energy.

The reason Dan Lyons had to quit being Fake Steve Jobs is that Fake Steve Jobs had become more Steve Jobs than Real Steve Jobs. It worked until Real Steve Jobs got sick. That tore a hole in the realtime-realspace continuum – illness is irreducibly physical – and Lyons lost his nerve. The existential nausea that is the lot of the ghostwriter overwhelmed him. He became Real Dan Lyons. Better to be a ghostwriter of the self than of the other. The nausea’s still there, but at least it’s endurable.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.