I was offline before offline was offline

“There are these two young fish swimming along, and they happen to meet an older fish swimming the other way, who nods at them and says, ‘Morning, boys, how’s the water?’ And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes, ‘What the hell is water?'” [source]

A couple of cavemen are walking through the woods. One sighs happily and says to the other, “I’m telling you, there’s nothing like being out in nature.” The other pauses and says, “What’s nature?”

It’s 1972. A pair of lovers go camping in a wilderness area in a national park. They’re sitting by a campfire, taking in the evening breezes. “Honey,” says the woman, “I have to confess I really love being offline.” The guy looks at her and says, “What’s offline?”

Continuing the discussion of Nathan Jurgenson’s essay on the fraught relationship between “the offline” and “the online” — I said my piece here, Michael Sacasas made his points here — Adam Graber observes:

People have always had awkward dynamics with other people, but […] books gave us a new metaphor to describe the experience of not being on the same page. Similarly, the Internet has given us new ways of thinking about experiences we’ve always had.

Yet, things aren’t exactly the same after the metaphor either. With our new awareness, our perspective has changed. We’re faced with a new normal. This is how technology changes us. It alters our perspective and our perception. We see the world in a new way.

That’s right, and it’s worth emphasizing the point because most people don’t really want to acknowledge it. If you advocate even a mild form of technological determinism, you tend to get an immediate and very dismissive reaction: “Tools don’t do anything. It’s how we use them that matters.” The reaction is an expression of what McLuhan termed “somnambulism,” and it seems to be our default mode. We hate the idea that we’re not in control, not driving the car. If a technology has some effect on us, we tell ourselves, it’s because we chose for it to have that effect.

But the fact that we now consciously experience two different states of being called “online” and “offline,” which didn’t even exist a few years ago, shows how deeply technology can influence not only what we do but how we perceive ourselves and the world. Certainly we didn’t consciously choose to look at our lives in this way and then formulate the technology to fulfill our desire. The defense contractors who started building the internet didn’t say to each other, “For the good of mankind, let’s create a new dichotomy in perception.” And when we, as individuals, log on for the first time (or the ten-thousandth time), we don’t say to ourselves, “I’m going to use this new technology so I’ll be able to think about my life in terms of being online and being offline.” But that’s what happens.

It’s not that technology “wants” us to think in this way — technology doesn’t want a damn thing — it’s that technology has side effects that are unintended, unimagined, unplanned-for, unchosen, often invisible, and frequently profound. Technology gave us nature, as its shadow, and in a similar way it has given us “the offline.”

I’m with stupid

With the digital computer, we have created a machine that we can program not only to help us but to trick us – the greatest of all tricksters, perhaps, because it hoodwinks us about what is most central to who we are: the nature of our thought, the way we make sense. In The Stupidity of Computers, a new article in n+1, David Auerbach describes the nature of the trickery and our complicity in it.

A bit:

The dissemination of information on the web does not liberate information from top-down taxonomies. It reifies those taxonomies. Computers do not invent new categories; they make use of the ones we give them, warts and all. And the increasing amount of information they process can easily fool us into thinking that the underlying categories they use are not just a model of reality, but reality itself. […]

We will increasingly see ourselves in terms of these ontologies and willingly try to conform to them. This will bring about a flattening of the self—a reversal of the expansion of the self that occurred over the last several hundred years. While in the 20th century people came to see themselves as empty existential vessels, without a commitment to any particular internal essence, they will now see themselves as contingently but definitively embodying types derived from the overriding ontologies. This is as close to a solution to the modernist problem of the self as we will get.

If and when the Turing Test is finally passed, it probably won’t mean that computers have learned what it is to be human. It will probably mean that we’ve forgotten.

RELATED: In American Scientist, Brian Hayes provides a lucid overview of the way A.I. research has changed in recent years, using three case studies – checkers, translation, and question-answering – to illustrate the shift in strategy away from subtle mind-replication (not very effective) and toward brute-force data-crunching (sometimes startlingly effective). Stupid is smart in its own way.

“Let’s go.” “Why not.”

Damn. It just struck me that with Ernest Borgnine’s death, the entire Wild Bunch is gone. The others were William Holden, Ben Johnson, and the inimitable Warren Oates. They won’t be back.

Adios.

Faceborg

Matthew Berk offers some interesting numbers about Facebook’s increasing grip on the web. As he points out, Facebook’s extraordinary growth is part of a bigger story about a basic shift in the structure of the web. “In the old world,” he writes, the web was made up largely of “pages and sites” that were “about” other pages and sites, and “search engines used calculations based on the link as a key signal of network-wide relevance.” But now, as more people’s use of the web comes to be mediated by proprietary social-networking services (often delivered through apps on mobile devices), pages “are becoming ever more incidental,” and APIs managed by those private services are displacing links as the connections that shape the structure (or “graph”) of the web and our experience of it.

Berk did an extensive crawl of the web and found that 22 percent of all pages now contain Facebook URLs, a number he senses is “rising, and fast.” When you consider the vastness of the web, and how long its sites have been proliferating, that’s a striking figure. As Berk observes, “it’s taken roughly a decade for Facebook to not only accrue roughly a billion users, but to entangle itself in about a fifth of the Web.” Even more striking, and troubling for anyone concerned about the web’s future as an open, popular network, is Berk’s finding about the way in which Facebook is entangling itself in the web:

Although about a fifth of the Web (based on this sample) references Facebook, and despite there being close to half a billion references to Facebook URLs, there are only 3.5 million unique URLs in the sample set. The bulk of these are for Facebook-specified integrations (those that add social dimension to a Web site), as opposed to specific inbound URLs. My key takeaway here is that although Facebook may know about a sizable portion of the Web, the Web barely knows anything about what’s inside of Facebook.

Facebook loves to talk about “sharing,” but that hardly seems to be its business strategy. What Facebook wants is control. It’s like a giant castle with high walls and a deep moat — and spies everywhere.
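
To make the arithmetic in Berk’s excerpt concrete: what matters is the gap between references to Facebook URLs scattered across crawled pages (hundreds of millions of them) and the distinct Facebook URLs those references resolve to (a few million). Here is a minimal sketch of that kind of tally. It is not Berk’s pipeline; the function name, the regex, and the assumption that the crawl sample arrives as raw HTML strings are all mine.

import re

FACEBOOK_URL = re.compile(r'https?://(?:www\.)?facebook\.com/[^\s"\'<>]*', re.IGNORECASE)

def facebook_footprint(pages):
    """pages: an iterable of raw HTML strings from a crawl sample (hypothetical input format)."""
    n_pages = 0
    pages_with_fb = 0    # pages containing at least one Facebook URL
    total_refs = 0       # every reference to a Facebook URL, counted with repetition
    unique_urls = set()  # distinct Facebook URLs seen anywhere in the sample
    for html in pages:
        n_pages += 1
        refs = FACEBOOK_URL.findall(html)
        if refs:
            pages_with_fb += 1
        total_refs += len(refs)
        unique_urls.update(refs)
    return {
        "share_of_pages_referencing_facebook": pages_with_fb / n_pages if n_pages else 0.0,
        "total_facebook_url_references": total_refs,
        "unique_facebook_urls": len(unique_urls),
    }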

The nepotistic linker

Mathew Ingram, GigaOM’s media blogger, gave one of his semiyearly lectures on the sanctity of the hyperlink yesterday. Linking is “a core value of the web.” Links are “the currency of the collaborative web.” Links are “one of the crucial underpinnings of the internet and the web.” Links are “the lifeblood of the internet.” Etc.

Ingram certainly crams a lot of links into his post. The net’s lifeblood squirts out all over the place. But I did a little forensic examination of his links, and I discovered that fully 42% of them point to other GigaOM articles. That’s right: Nearly half of all the links in Ingram’s story about the sanctity of links are self-links! If links are the currency of the web, Ingram should be jailed for nepotism.
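
For what it’s worth, that kind of forensic exam takes only a few lines of code: fetch the post, pull out its anchors, and see what share of them point back to the publisher’s own domain. Here’s a rough sketch of the idea (not the exact tally I ran, and the URL in the comment is just a placeholder):

from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collects the href of every absolute link in a page."""
    def __init__(self):
        super().__init__()
        self.hrefs = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.startswith("http"):
                    self.hrefs.append(value)

def self_link_share(post_url):
    """Fraction of a post's outbound links that point back to its own domain."""
    html = urlopen(post_url).read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    if not collector.hrefs:
        return 0.0
    home = urlparse(post_url).netloc
    internal = sum(1 for href in collector.hrefs if urlparse(href).netloc == home)
    return internal / len(collector.hrefs)

# Example (placeholder URL): self_link_share("https://example.com/a-post-about-links")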

What spurred Ingram’s post is his annoyance at media outlets that, in his estimation, fail to provide a link to the original source of a story. He sees internet hyperlinking as an elaborate intrajournalistic tribute system, a mechanism through which pixel-stained wretches credit and track scoops. Being meticulous in issuing scoop-links is “a principle that distinguishes ethical outlets from unethical ones.” He complains, for instance, that after GigaOM recently “broke a story” on a patent fight, “several outlets covered the same news without providing a link to our post on it.” The nerve!

Now, I’m sure that the painstaking monitoring of scoop-links is a very, very important activity in some obscure corner of the universe, but for most people the real value of links, as a form of currency, lies in the way they can encapsulate a personal assessment of the worth of a piece of content on the net — a webpage, or a blog post, or a YouTube video, or whatever. A truly valuable link isn’t some routine, automatic token of credit; it represents a careful, conscious expression of personal judgment. In its original form, Google worked because links meant something. If you could trust the sincerity of links, you could count them up and have a reliable indicator of collective wisdom.
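
The “count them up” logic is worth spelling out. In its simplest form a link is a vote; PageRank’s refinement was to let votes from well-linked pages carry more weight. Here’s a toy version run on a made-up four-site graph, a crude illustration of the idea rather than anything resembling Google’s production system:

def pagerank(graph, damping=0.85, iterations=50):
    """Simplified PageRank on a dict mapping each page to the pages it links to."""
    pages = list(graph)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in graph.items():
            targets = outlinks or pages  # a dangling page spreads its vote evenly
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

links = {  # hypothetical link graph
    "a.example": ["b.example", "c.example"],
    "b.example": ["c.example"],
    "c.example": ["a.example"],
    "d.example": ["c.example"],
}
print(pagerank(links))  # c.example, the most linked-to site, ends up with the highest rank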

Those days are gone. Meaningful links are still out there, of course, but they’ve been overwhelmed by spam links, lazy links, automatic links, SEO links, promotional links, and, yes, self-links. The good links have been crowded out by all the links that exist for ulterior, usually self-serving purposes — that have nothing to do with one human being making a careful assessment of the value of the work of another human being. The currency has been debased. That’s why Google now has to evaluate something like 200 different signals to rank search results. Links are far less reliable than they used to be.

It’s silly to get riled up about a commercial publication linking to its own content. That’s just business. It was always going to happen, and it happened. Not long ago self-linking was controversial; now it’s pretty much invisible. But to climb up on a high horse and criticize others for failing to issue scoop-links while you yourself are engaging in rampant self-linking is a bit rich. Self-linking has undermined the currency of the web to a far greater extent than has the occasional omission, accidental or deliberate, of a scoop-link.

Ingram suggests that outlets may avoid handing out scoop-links because “the financial model for digital media — that is, advertising — relies on page views, and one of the ways to juice those numbers is to pretend you broke a story. But regardless of whether this inflates reader numbers in the short term, it ultimately depreciates the value of the blog that does it, and that leads to a loss of trust.” He could, of course, have leveled pretty much the same charge against his own nepotistic linking. Every time you self-link to a GigaOM post, Mathew Ingram, an angel dies.

Storms and doldrums

In the latest episode of On the Media, the public radio show, I chat with Bob Garfield about the storm that took down one of Amazon’s data centers and, with it, Netflix, Pinterest, and Instagram.

Speaking of Instagram, I have a short essay in this weekend’s Wall Street Journal that speculates on what’s behind the underwhelming state of innovation in America today. The piece begins:

When Facebook’s Mark Zuckerberg announced in April that his company would pay $1 billion in cash and stock to buy Instagram, the deal put an exclamation mark on the shrinking ambitions of our inventors and entrepreneurs. Instagram has 13 employees and zero revenues. Its claim to fame is a free smartphone app that reformats photographs to look as if they were taken by an old Kodak Instamatic. Providing yet another means for people to fiddle with snapshots is super, but it’s hardly a moonshot. …

I introduced the idea of a “hierarchy of innovation” in a post in May. Here, as a reminder, is what my proposed hierarchy looks like:
[Image: the hierarchy of innovation]

The line between offline and online

This post, along with seventy-eight others, is collected in the book Utopia Is Creepy.

Over at The New Inquiry, Nathan Jurgenson, a graduate student in sociology, has a captivating essay called The IRL Fetish (IRL is net slang for “in real life”), which argues that, far from alienating us from unmediated experience, from “real life,” as it’s quaintly known, the net has actually deepened our appreciation of “the offline” — to the point, in fact, where appreciation has turned into fetishistic obsession. The piece is crisply written, sharply argued, and fundamentally wrongheaded.

Jurgenson begins by describing what he grants is an ever deepening “intrusion” of digital media into the most intimate spheres of our lives:

Hanging out with friends and family increasingly means also hanging out with their technology. While eating, defecating, or resting in our beds, we are rubbing on our glowing rectangles, seemingly lost within the infostream.

Where’s the Lysol?

But it’s not just that we’re spending so much time online, Jurgenson notes, perceptively. It’s that “the logic” of social networks and other online sites and services “has burrowed far into our consciousness.” Software shapes not only our lives but our beings. The saturation of “real life” with “digital potential,” he continues, has spawned a backlash against the net’s hegemony. He gives a quick summary of the argument of the critics: “Given the addictive appeal of the infostream, the masses have traded real connection for the virtual.” We can’t eat a meal with friends or loved ones without also dining on data from our smartphones. The backlash, Jurgenson suggests, is gaining momentum: “Writer after writer laments the loss of … sensory peace in this age of always-on information, omnipresent illuminated screens, and near-constant self-documentation.”

Then, not exactly out of the blue, comes the Big But (the first of two, actually):

But as the proliferation of such essays and books suggest [sic], we are far from forgetting about the offline; rather we have become obsessed with being offline more than ever before. We have never appreciated a solitary stroll, a camping trip, a face-to-face chat with friends, or even our boredom better than we do now. Nothing has contributed more to our collective appreciation for being logged off and technologically disconnected than the very technologies of connection. The ease of digital distraction has made us appreciate solitude with a new intensity. We savor being face-to-face with a small group of friends or family in one place and one time far more thanks to the digital sociality that so fluidly rearranges the rules of time and space. In short, we’ve never cherished being alone, valued introspection, and treasured information disconnection more than we do now. Never has being disconnected — even if for just a moment — felt so profound.

When we are pummeled so relentlessly with the first person plural, we get antsy. We begin to suspect that words are being shoved into our mouths and thoughts into our heads, that our sensibilities are being poured into a mold of someone else’s fashioning. Such suspicions are more than warranted here.

You might say that Jurgenson is just stating the obvious, reprising the old Joni Mitchell refrain: “you don’t know what you’ve got till it’s gone.” A really thirsty man will appreciate a glass of water more than an amply hydrated man. But instead of arriving at the obvious conclusion — that being amply hydrated is better than being really thirsty — Jurgenson gives it a wrenching spin. The sense of loss that comes with being hyper-mediated, he wants us to believe, is actually a sign of gain. “Nothing has contributed more to our collective appreciation for being logged off and technologically disconnected than the very technologies of connection.” That sip of water was amazing! Thank god I’m parched! I guess you can’t blame a guy for looking at the bright side, but while it’s true that having less of a precious thing makes that precious thing seem all the more precious, that hardly means we should applaud the loss. The yearning for something slipping from our grasp should be taken as a warning, not a cause for celebration.

But there are deeper problems. What are we to make of this: “We have never appreciated a solitary stroll, a camping trip, a face-to-face chat with friends, or even our boredom better than we do now.” That’s the kind of sweeping statement that would benefit from a little evidence. A brief backward glance at the history of philosophy, literature, art, or even just nature photography will tell you that there have been plenty of folks who have had a very deep — indeed, profound — appreciation of the beauties and restorative capacities of solitude, nature, and “face-to-face” chats with friends. I’m going to resist the temptation to quote some Wordsworth or Thoreau, but I will say that while our present age may be tops in some things, it’s far from tops in the area of solitary strolls. The real tragedy — if in fact you see it as a tragedy, and most people do not — is that the solitary stroll, the camping trip, the gabfest with pals are themselves becoming saturated with digital ephemera. Even if we agree to turn off our gadgets for a spell, they remain ghostly presences — all those missed messages hang like apparitions in the air, taunting us — and that serves to separate us from the experience we seek. What we appreciate in such circumstances, what we might even obsess over, is an absence, not a presence.

And then, more out of the blue, comes the second Big But: Jurgenson doesn’t even want to grant us license to recognize the absence as an absence, to pay tribute to, much less seek to regain a piece of, what’s been lost. When we do that, we’re merely “fetishizing the offline.” We’re indulging a reverence for something that, apparently, never really existed. “It is the fetish objects of the offline and the disconnected that are not real,” he concludes, his argument becoming a tangle of abstractions. “Those who mourn the loss of the offline are blind to its prominence online.” No, actually, they’re not. One need not subscribe to what Jurgenson calls “digital dualism” — “the habit of viewing the online and offline as largely distinct” — to believe that there are real losses involved when we enter an environment mediated by ever-beckoning computer screens and saturated with data. Of course “the online” is now as much a part of real life as “the offline” — the human world has always been, to borrow Walter Ong’s term, technologized — but the fact that they’re blurring, and blurring quickly, should spur us to examine, critically, the consequences of that blurring, not to conclude that the blurring turns a real distinction into a fiction, as if when you whisk oil and vinegar into a salad dressing, you whisk oil and vinegar out of existence. To exaggerate a distinction is a lesser crime than to pretend it doesn’t exist at all.

UPDATE (7/4): Another grad student, Michael Sacasas, offers an Arcadian critique of Jurgenson’s essay. Here, Sacasas looks at “the claim that ‘offline experience’ is proliferating”:

What I suspect Jurgenson means here is that awareness of offline experience and a certain posture toward offline experience is proliferating. And this does seem to be the case. Semantically, it would have to be. The notion of the offline as “real” depends on the notion of the online; it would not have emerged apart from the advent of the online. […]

It remains the case, however, that “offline,” only recently constituted as a concept, describes an experience that paradoxically recedes as it comes into view. Consequently, Jurgenson’s later assertion – “There was and is no offline … it has always been a phantom.” – is only partially true. In the sense that there was no concept of the offline apart from the online and that the online, once it appears, always penetrates the offline, then yes, it is true enough. However, this does not negate the fact that while there was no concept of the offline prior to the appearance of the online, there did exist a form of life that we can retrospectively label as offline. There was, therefore, an offline (even if it wasn’t known as such) experience realized in the past against which present online/offline experience can be compared.

What the comparison reveals is that a form of consciousness, a mode of human experience is being lost. It is not unreasonable to mourn its passing, and perhaps even to resist it.

That’s clarifying. The concept of “offline” came into existence at precisely the same moment as the concept of “online,” which means that, as a concept, “offline” can only exist in the shadow of “online” and hence is inextricable from “online” (as Jurgenson, in a sense, argues). But when we use the word “offline,” what we’re often actually doing, as Sacasas observes, is referring to a state of being that existed prior to the arrival of “online” — a state that is, or at least was, real and that is, or was, very different from our current state of “online/offline interpenetration.” The very existence of the online/offline dichotomy suggests the extent of the net’s influence on the way we perceive the world.

Human reality has always been augmented by technology, but each new augmentation changes the nature of the augmentation and hence of reality. So to say that reality has always been augmented is to say something both obvious and meaningless.