Terminal days

The “PC is dying” and “Web is dying” tropes that have been bouncing around the meme-o-sphere express a real transformation in the world of computing/media/life. But they express it through the warped retinae of the techno-nostalgist, the high-tech Luddite. The PC and the Web aren’t dying. As cultural forces, they’re more powerful, more inescapable than ever. What the PC and the Web are doing is maturing, the former exploding into a welter of slick consumer appliances, the latter contracting into a corporate-controlled menu of slick services. They’re both assuming what promise to be their stable forms. The high-tech Luddite, or HTL, confuses maturing with dying, because what’s being lost in the maturation process is the thing which the HTL most values, most yearns to protect. Like all Luddites, the HTL is what Kirkpatrick Sale termed a rebel against the future. He wants to arrest progress in order to maintain what he sees as the ideal state of computing, to continue in perpetuity that brief Homebrew interregnum between Mainframe Dominance and Media Dominance. The High-Tech Luddite still thinks Woz will win.

In a new Slashdot essay, the poster known as Unknown Lamer presents the rebel Luddite case under the stirring and quixotic headline “The Greatest Battle of the Personal Computing Revolution Lies Ahead.” He might have called it “Tilting at Clouds.” Unknown Lamer wants to go back — and thinks we can go back — to an idealized “pre-PC” time of truly personal computing, a world in which the user controls the workings of both hardware and software:

You control the data, you control the software; the Personal Computer is a uniquely personal artifact that the user adapts to his own working style. One consequence of this is that creating is as easy (perhaps easier) as consuming content. Another nice side effect is that your data remains private by virtue of local storage.

This is a photoshopped image of the Eden that existed, or almost existed, before Apple started welding its cases shut, before the cloud arrived and started sucking personal data into its vast corporate matrix, before the Wild Web was homogenized, pasteurized and packaged as Facebook, before the return of the dread Terminal. It’s the Eden that would exist again if hobbyist culture suddenly displaced consumer culture as mainstream culture, if everyone suddenly armed themselves with a set of torx screwdrivers.

Unknown Lamer is not blind to reality: “We are staring at a bleak future, and living in a bleak present in some ways.” Yet he still sees the light of the past flickering at the end of the tunnel of the future: “But there is hope for the battle to be won by the Personal Computer instead of the Terminal.” All we need to do, says Unknown Lamer, is to rise up together and overthrow the current cloud paradigm, the current OS paradigm, and the current hardware paradigm. We can bring computing back to human scale by running our own portable virtual servers, replacing corporate OSes with open OSes, regaining local control over our data, overthrowing the systems of The Man. It’s the old Luddite dream, transposed from the early Industrial Era to the early Digital Era. The original Luddites, as Sale explained,

were, like Robin’s Merry Men, victims of progress, or what was held to be progress. Having for centuries worked out of their cottages and small village shops on machines that, though far from simple, could be managed by a single person, assisted perhaps by children, they suddenly saw new, complex, large-scale machines coming into their settled trades, or threatening to, usually housed in the huge multistory buildings rising in their ancient valleys. Worse still, they saw their ordered society of craft and custom and community begin to give way to an intruding industrial society and its new technologies and systems, new principles of merchandise and markets … beyond their ken or control. … They were rebels of a unique kind, rebels against the future that was being assigned to them by the new political economy then taking hold …

Because the Luddites were wrangling over the means of production, they had the people with them, at least for a time. Because the High-Tech Luddites are wrangling over the means of entertainment, they are battling against not only the corporate tide but the populist tide. “Where will we be in ten years?” asks Unknown Lamer. “If Google, Amazon, Apple, and Old Media get their way, in a new dark age of computing. Certainly, you’ll have a fancy tablet and access to infinite entertainment. But you will own nothing.” But a fancy tablet and access to infinite entertainment seem to be exactly what the people want. Ownership, after all, is a nuisance.

My heart’s with you, Unknown Lamer, but history isn’t.

Consciousness expansion in Silicon Valley: a brief history

1965:

The Pranksters are primed in full Prankster regalia. Paul Foster has on his Importancy Coat and now has a huge head of curly hair, a great curly mustache pulling back into great curly mutton chops roaring off his face. Page Browning is the king of face painters. He becomes a full-fledged Devil with a bright orange face and his eyes become the centers of two great silver stars painted over the orange and his hair is silver with silver dust and he paints his lips silver with silver lipstick. This very night the Pranksters all sit down with oil pastel crayons and colored pens and at a wild rate start printing handbills on 8-1/2 x 11 paper saying CAN YOU PASS THE ACID TEST? and giving Big Nig’s address. As the jellybean-cocked masses start pouring out of the Rolling Stones concert at the [San Jose] Civic Auditorium, the Pranksters charge in among them. Orange & silver Devil, wild man in a coat of buttons — Pranksters. Pranksters! — handing out the handbills with the challenge, like some sort of demons, warlocks verily, come to channel the wild pointless energy built up by the Rolling Stones inside.

They come piling into Big Nig’s, and suddenly acid and the Worldcraze were everywhere, the electric organ vibrating through every belly in the place, kids dancing not rock dances, not the frug and the — what?—  swim, mother, but dancing ecstasy, leaping, dervishing, throwing their hands over their heads like Daddy Grace’s own stroked-out inner courtiers — yes! — Roy Seburn’s lights washing past every head, Cassady rapping, Paul Foster handing people weird little things out of his Eccentric Bag, old whistles, tin crickets, burnt keys, spectral plastic handles. Everybody’s eyes turn on like lightbulbs, fuses blow, blackness — wowwww! — the things that shake and vibrate and funnel and freak out in this blackness — and then somebody slaps new fuses in and the old hulk of a house shudders back, the wiring writhing and fragmenting like molting snakes, the organs vibro-massage the belly again, fuses blow, minds scream, heads explode, neighbors call the cops, 200, 300, 400 people from out there drawn into The Movie, into the edge of the pudding at least, a mass closer and higher than any mass in history, it seems most surely, and Kesey makes minute adjustment, small toggle switch here, lubricated with Vaseline No. 634—3 diluted with carbon tetrachloride, and they ripple, Major, ripple, but with meaning, 400 of the attuned multitude headed toward the pudding, the first mass acid experience, the dawn of the Psychedelic, the Flower Generation and all the rest of it . . .

2012:

That shifting mind-set — the idea that life and work must be blended rather than separated — is increasingly common, according to other doctors, scholars who study work habits and the generally well-compensated workers of Silicon Valley like Andrew Sinkov, 31, a vice president of marketing at Evernote, a digital note-taking service. “ ‘Life-work balance’ is a nonsense term,” Mr. Sinkov said. “The idea that I have to segment work and life is based on some archaic lunar-calendar thing.” Given that his employer is paying to clean his apartment, Mr. Sinkov and his girlfriend do not have to quibble about cleanup duties. The value of the perk is greater than the money saved, he said. “It eliminates a decision I have to make,” Mr. Sinkov said. “It’s just happening and it’s good, and I don’t have to think about it.”

His boss, Mr. Libin, also gives employees $1,000 to spend on vacation, but it has to be “a real vacation.” “You can’t visit the in-laws; you have to go somewhere,” Mr. Libin said, adding that he did not see these perks just as ways to keep his work force — and their families — engaged. He said he also tended to be frugal as a chief executive, preferring these types of peace-of-mind benefits to, say, business-class travel, which the company does not pay for. “Happy workers make better products,” he said. “The output we care about has everything to do with your state of mind.”

Which is stronger: information or ignorance?

Over at the Times’s site, I contribute to a new “Room for Debate” discussion, “Reading More but Learning Less?,” on this question:

In the Web 2.0 age, when many Americans see hundreds of articles every day, are we more informed than previous generations were?

There’s not a whole lot of disagreement among the six contributors. In fact, the pieces seem more like variations on a theme, the theme being something like this: A poorly informed electorate is a cultural problem, and trying to apply a technological solution to it doesn’t seem to do all that much good. The point is made most eloquently, I think, by the historian Melvyn Dubofsky, who provides a glimpse into some of the decidedly low-tech, but nonetheless effective, forms of “social networking” that prevailed in the early years of the twentieth century:

A century ago nearly every city, town, and village had a lyceum or other venue in which visiting speakers regaled packed auditoriums with lectures on popular and abstruse subjects. In Brownsville, Brooklyn, a poor, largely East European Jewish neighborhood, the local Labor Lyceum scheduled talks on Marxism, socialism, anarchism, evolution, and religion as well as performances by talented musicians …

Such venues existed even in such far off places as Lead, S.D., Butte, Mont., and Cripple Creek, Colo., often in conjunction with local trade unions or labor and/or socialist parties. In unionized cigar-making factories in Tampa, Fla. and New York City, lectors, or readers, sat on high stools reading Shakespeare, Marx, Engels, Darwin, Hugo, Balzac, and Tolstoy as cigar rollers performed their skilled work.

When we define the effectiveness of information distribution in terms of its “scale,” we may be looking at the wrong thing.

Here’s a bit from my contribution:

It’s a fallacy to believe that dispensing more information more quickly will, in itself, raise the general level of public awareness. To be informed, a person has to want to be informed, and the percentage of Americans demonstrating such motivation seems to have remained pretty stable, and pretty abysmal, throughout our vaunted information age.

Why not stop the presses?

Noting that newspapers are losing money on their print editions — and that many observers believe that print will eventually die — Alan Jacobs poses what seems to be the obvious question:

But if print is a money-loser — and I keep hearing that it is, for newspaper after newspaper — why not end it now, today, and go purely digital? Why shouldn’t newspapers around the world, or at least in the most internet-saturated parts of the world, just stop the presses — especially if they know they’ll have to do it anyway, and in the meantime the cash is draining away? What are the restraining factors? Habit and tradition? Powerful executives who have known the print world for so long that they can’t imagine life without it? The half-conscious feeling that paper and ink are real in ways that pixels and bits are not, and that if you only have pixels and bits you might as well be just a blogger, without a saleable product you can hold in your hand? This inquiring mind really, really wants to know.

Let me take a crack at the answer. There are a couple of fallacies underlying Jacobs’s question. First, the problem newspapers face is not that “print is a money-loser”; it’s that the entire operation is a money-loser — i.e., the costs are higher than the revenues. Second, from a production standpoint, you can’t easily separate the “print side” from the “online side.” You can, to a fair degree, separate print revenues (ads and subscriptions) from online revenues (ads and paywall fees); what you can’t separate are the costs. The major cost in a newspaper business is not the cost of printing and distributing the print edition; it’s the cost of producing content — the labor cost that goes into staff reporters, editors, photographers, etc., plus the cost of purchasing content from outside suppliers — and that content-production cost is largely shared between the print and online sides. So if you were to simply discontinue the print edition, you would end up destroying what is still your major source of revenues (print ads and subscriptions), but you wouldn’t do much to reduce your major source of costs. You’d end up sacrificing more revenues than costs, and your financial situation would go from bad to worse. The “cash drain” would turn into a flood. To put it another way: the print edition may well be losing money, but the cash it generates continues to subsidize the production of content for the online side.
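
To see why the arithmetic cuts against a digital-only switch, here is a back-of-the-envelope sketch in Python. The numbers are entirely made up (they come from no actual newspaper) and are only meant to mirror the structure just described: most of the revenue still comes from print, while most of the cost is shared content production.

```python
# Hypothetical round numbers for an imaginary metro paper, chosen only to
# mirror the structure described above, not drawn from any real company.

print_revenue = 70.0   # print ads + subscriptions ($M/yr, made up)
online_revenue = 15.0  # online ads + paywall fees ($M/yr, made up)

content_cost = 60.0    # newsroom, photographers, purchased content (shared by both sides)
printing_cost = 30.0   # presses, paper, trucks (the only cost that vanishes with print)

# Status quo: the whole operation loses a little money.
status_quo = (print_revenue + online_revenue) - (content_cost + printing_cost)

# Stop the presses: the print-only costs disappear, but so does the print
# revenue, while the shared content cost stays on the books.
digital_only = online_revenue - content_cost

print(f"status quo:   {status_quo:+.1f}")    # 85 - 90 = -5.0
print(f"digital only: {digital_only:+.1f}")  # 15 - 60 = -45.0
```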

So if you stop the presses, you have to do one of two things to avoid financial disaster: (a) figure out a way to make a whole lot more money from the online side or (b) fire a whole lot of people. And since nobody yet knows how to do (a), that means your only option is (b). And then, once you eviscerate your newsroom, you face a new conundrum: you’ve just destroyed your ability to produce good, distinctive content for your online edition. You’re hosed. That, as I see it, is why newspapers continue to be printed.

Discussions about the shift from print to online in the news business tend to focus on the demand side (new patterns of consumption) because that’s where the fun stuff is happening. But, as I’ve argued in the past — here and here — the problem newspapers face lies more on the supply side (old patterns of production). Until the supply problem is resolved at both the macro level (industrywide overcapacity in production) and the micro level (costs outpacing revenues in individual firms), the industry will struggle. And stopping the presses, at this point, will just compound the problem.

Google opens the pod bay doors

Today is a special day. After years of Kremlinesque secrecy, Google has finally opened its data centers to public inspection, at least through a photographer’s lens. There’s beauty here — serene, icy, otherworldly.

Here are racks of servers, bathed in the cold light of a thousand diodes, in a Georgia data center:

Here is a not-so-dusty archive, where all your gmails are stored on backup tapes for posterity, in a South Carolina center (the tall, skinny fellow at the end of the corridor is a robot):

Here’s a look at the cooling system in an Oregon center, the water pipes coded in googley colors (red is hot, blue is cold, and I’m not sure what yellow and green are):

Here’s a view of the interior of an Iowa center (note that a couple of racks seem to be wrapped up to keep them out of view; these must be the ones used to operate Larry Page’s brain):

And, finally, here’s the server floor of a center in Finland. The facility used to be — how perfect is this? — a paper mill:

The photographer is Connie Zhou.

The shock of the old

At Slate, Simon Reynolds offers a challenging, critical assessment of the remix cult, or, as he dubs it, the “recreativity movement.” Here’s a taste:

Still, let’s entertain for a moment the notion that the recreativity believers are right: that innovation is an obsolete and unhelpful notion and that the curatorial, informationalized model of art is where things are at. A few years ago William Gibson opined, via Twitter, that “less creative people believe in ‘originality’ and ‘innovation,’ two basically misleading but culturally very powerful concepts.” Forget for the moment that Gibson would appear to be rather an original writer, an innovator in his field. What’s relevant here is that he is characterizing as false consciousness the mindset that powered everything from 20th-century modernism to the most dynamic eras of popular music. […] Even today, evidence would suggest that artists, writers, and musicians who labor under the misconception that it’s possible to come up with something new under the sun are much more likely to try for that and thus stand a better chance of reaching it. Perhaps it would be better if we continued to be “misled”! Whereas the ideology of recreativity, as it spreads, not only legitimizes lazy, parasitic work, it actively encourages it by making it seem cool, “timely,” somehow more advanced than that quaint middlebrow belief in the shock of the new.

As much as it is propaganda in favor of underachievement, recreativity is also, I suspect, a form of solace: reassuring balm for the anxiety of overinfluence, the creeping fear that one might not have anything of one’s own to offer. The achievements of a great composer or a great band (such as Led Zeppelin, a target of Everything Is A Remix’s Kirby Ferguson) seem less imposing if you can point to their debts and derivations. Part of the appeal of standing on the shoulders of giants is that it makes the giants seem smaller.

Reynolds is on target here, and elsewhere. My only grouse is that he’s a little too quick to lump people into the remix camp, in a way that blurs some important distinctions. In surveying “proponents of recreativity,” he includes, for instance, David Shields, whose book Reality Hunger incorporated, sneakily, the words of many earlier writers. In its celebration of allusiveness, Reality Hunger was, it strikes me, a critique of the dismembering impulse that characterizes the remix cult: the desire to see an original work as only a patchwork of “sources.” In its most childish form, this desire manifests itself in a gleeful rush — all too common today — to see allusion as plagiarism; the critical faculty is overrun by the logic of the search engine. (If in listening to Led Zeppelin you’re only able to hear Robert Johnson, there’s not something wrong with Led Zeppelin; there’s something wrong with you.) Much to Shields’s dismay, his publisher required him to make his allusions explicit by including a list of sources at the end of the book, essentially kow-towing to the tiresome sensibilities of the remixologists.

And, though I’m wary of parsing tweets, the William Gibson remark that Reynolds mentions seems more subtle than Reynolds gives it credit for. Gibson is criticizing the worship of the “now factor” that characterizes the self-consciously avant garde, of which the remix brigade is just the latest front. (As Reynolds suggests, a good bit of the appeal of remix rhetoric lies, paradoxically, in the way it makes its users feel “original” and “creative” and oh so “now.”) Far from dismissing the creativity of the modernists, steeped as it was in historical consciousness, Gibson was, I think, applauding it. The modernists were nothing if not allusionists, for whom the shock of the new could only be expressed by someone with a deep understanding of the shock of the old. Eliot’s technique in The Waste Land was not dissimilar to Shields’s in Reality Hunger.

When Pound told artists to “Make it new,” he wasn’t diminishing the importance of what came before, or its usefulness to the contemporary enterprise of creation, but he was emphasizing its insufficiency. For the real artist, the cultural past is, like nature, raw material, which is remade, not merely remixed, by the pressure of individual talent. The problem with “the curatorial, informationalized model of art” is that it wants to reduce inspiration to derivation, allusion to citation, originality to recombination, art to data processing — and, yes, to make giants seem smaller. It’s a problem that has more to do with a misperception of talent than with a retrospective gaze.

A post on the occasion of Facebook’s billionth member

“Heaven is a place where nothing ever happens.” —Talking Heads

Let’s say, for the sake of argument, that David Byrne was correct and that the distinguishing characteristic of paradise is the absence of event, the total nonexistence of the new. Everything is beautifully, perfectly unflummoxed. If we further assume that hell is the opposite of heaven, then the distinguishing characteristic of hell is unrelenting eventfulness, the constant, unceasing arrival of the new. Hell is a place where something always happens. One would have to conclude, on that basis, that the great enterprise of our time is the creation of hell on earth. Every new smartphone should have, affixed to its screen, one of those transparent, peel-off stickers on which is written, “Abandon hope, ye who enter here.”

Maybe I’m making too many assumptions. But I was intrigued by Tom Simonite’s report today on the strides Google is making in the creation of neural nets that can actually learn useful things. The technology, it’s true, remains in its early infancy, but it appears at least to be post-fetal. It’s not at the level of, say, a one-and-a-half-year-old child who points at an image of a cat in a book and says “cat,” but it’s sort of in that general neighborhood. “Google’s engineers have found ways to put more computing power behind [machine learning] than was previously possible,” writes Simonite, “creating neural networks that can learn without human assistance and are robust enough to be used commercially, not just as research demonstrations. The company’s neural networks decide for themselves which features of data to pay attention to, and which patterns matter, rather than having humans decide that, say, colors and particular shapes are of interest to software trying to identify objects.”
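
To make the distinction concrete, here is a toy sketch (it bears no relation to Google’s actual systems) contrasting the older approach, in which a human decides which image features matter, with a small neural network that is handed raw pixels and left to work out for itself which patterns are worth attending to. It uses scikit-learn’s bundled 8-by-8 digits dataset, so the images are tiny, but the division of labor is the point.

```python
# Toy illustration only: hand-picked features vs. features a network learns
# for itself, on scikit-learn's small built-in digits dataset.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

# Old style: a human decides which features matter (here, a few crude
# ink-density summaries) and feeds them to a simple classifier.
def hand_picked_features(flat_images):
    imgs = flat_images.reshape(-1, 8, 8)
    return np.stack([imgs.mean(axis=(1, 2)),            # overall ink density
                     imgs[:, :4, :].mean(axis=(1, 2)),  # top-half density
                     imgs[:, 4:, :].mean(axis=(1, 2))], # bottom-half density
                    axis=1)

manual = LogisticRegression(max_iter=1000)
manual.fit(hand_picked_features(X_train), y_train)

# New style: hand the raw pixels to a neural network and let it decide
# which patterns in the data to pay attention to.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print("hand-picked features:", manual.score(hand_picked_features(X_test), y_test))
print("learned features:    ", net.score(X_test, y_test))
```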

The company has begun applying its neural nets to speech-recognition and image-recognition tasks. And, according to Google engineer Jeff Dean, the technology can already outperform people at some jobs:

“We are seeing better than human-level performance in some visual tasks,” he says, giving the example of labeling where house numbers appear in photos taken by Google’s Street View car, a job that used to be farmed out to many humans. “They’re starting to use neural nets to decide whether a patch [in an image] is a house number or not,” says Dean, and they turn out to perform better than humans.

But the real advantage of a neural net in such work, Dean goes on to say, probably has less to do with any real “intelligence” than with the machine’s utter inability to experience boredom. “It’s probably that [the task is] not very exciting, and a computer never gets tired,” he says. Comments Simonite, sagely: “It takes real intelligence to get bored.”

Forget the Turing Test. We’ll know that computers are really smart when computers start getting bored. If you assign a computer a profoundly tedious task like spotting house numbers in video images, and then you come back a couple of hours later and find that the computer is checking its Facebook feed or surfing porn, then you’ll know that artificial intelligence has truly arrived.

There’s another angle here, though. As many have pointed out, one thing that networked computers are supremely good at is preventing their users from experiencing boredom. A smartphone is the most perfect boredom-eradication device ever created. (Some might argue that smartphones don’t so much eradicate boredom as lend to boredom an illusion of excitement, but that’s probably just semantics.) To put it another way, what networked computers are doing is stealing from humans one of the essential markers of human intelligence: the capacity to experience boredom.

And that brings us back to the Talking Heads. For the non-artificially intelligent, boredom is not an end-state; it’s a portal to transcendence — a way out of quotidian eventfulness and into some higher state of consciousness. Heaven is a place where nothing ever happens, but that’s a place that the computer, and, as it turns out, the computer-enabled human, can never visit. In hell, the house numbers, or their equivalents, never stop coming, and we never stop being amused by them.