Facebook’s identity lock-in

“You’re invisible now, you’ve got no secrets to conceal.” -Bob Dylan

Facebook CEO Mark Zuckerberg has a knack for making statements that are at once sweeping and silly, but he outdoes himself with this one:

You have one identity … Having two identities for yourself is an example of a lack of integrity.

This is, at the obvious level, a clever and cynical ploy to recast the debate about Facebook’s ongoing efforts to chip away at its members’ privacy safeguards. Facebook, Zuckerberg implies, isn’t compromising your privacy by selling personal data to corporations; it is making you a better person. By forcing you, through its imposition of what it calls “radical transparency,” to have “one identity,” it is also imposing integrity on you. We should all be grateful that we have Zuck to act as our personal character trainer, I guess.

Zuckerberg’s self-servingly cavalier attitude toward other people’s privacy has provoked a firestorm of criticism over the last couple of weeks. Whether or not a critical mass of Facebook members actually care enough about online privacy to force Facebook to fundamentally shift its policies remains to be seen. Up to now, as I’ve pointed out in the past, Facebook’s strategy for turning identity into a commodity has consisted of taking two steps forward and then, when confronted with public resistance, apologizing profusely before taking one step back. I suspect that’s what will happen again – and again, and again.

But that’s not the subject of this post. Zuckerberg’s “one identity” proclamation reminded me of something I heard Jaron Lanier say in a recent lecture. He was talking about the way that Facebook, and other social networking sites, serve as a permanent public record of our lives. That’s great in a lot of ways – it gives us new ways to express ourselves, socialize, cement and maintain friendships. But there’s a dark side, too. Lanier pointed to the example of Bob Dylan. After growing up, as Robert Zimmerman, in Hibbing, Minnesota, Dylan shucked off his youthful identity, like a caterpillar in a chrysalis, and turned himself into the mysterious young troubadour Bob Dylan in New York City. It was a great act of self-reinvention, a necessary first step in a career of enormous artistic achievement. Indeed, it’s impossible to imagine the kid Zimmerman becoming the artist Dylan without that clean break from the past, without, as Zuckerberg would see it, the exercise of a profound lack of “integrity.”

Imagine, Lanier said, a young Zimmerman trying to turn himself into Dylan today. Forget it. He would be trailing his online identity – his “one identity” – all the way from Hibbing to Manhattan. “There’s that goofy Zimmerman kid from Minnesota,” would be the recurring word on the street in Greenwich Village. The caterpillar Zimmerman, locked into his early identity by myriad indelible photos, messages, profiles, friends, and “likes” plastered across the Web, would remain the caterpillar Zimmerman. Forever.

More insidious than Facebook’s data lock-in is its identity lock-in. The invisibility that Dylan describes at the end of “Like a Rolling Stone,” where you’re free of your secrets, of your past life, is a necessary precondition for personal reinvention. As Robert Zimmerman traveled from Hibbing to New York, he first became invisible – and then he became Bob Dylan. In the future, such acts of transformation may well become impossible. Facebook saddles the young with what Zuckerberg calls “one identity.” You can never escape your past. The frontier of invisibility is replaced by the cage of transparency.

Long player: super deluxe limited-edition reissue

A correspondent, noting the imminent re-re-re-release, in several analogue and digital formats at escalating price points, of the Rolling Stones masterwork Exile on Main Street, suggests that I issue my own re-release of my 2007 post Long Player, which was inspired, in part, by the Stones record and which, as it happens, I wrote in the cellar of a villa in the south of France. I was thinking of hiring a crackerjack blogsman to remix the post – Doc Searls, perhaps – but in rereading it I realized that the original mix has a certain distinctive quality that, whatever its flaws, captures the spirit of the heady times in which it was composed. Get down:

I started reading David Weinberger’s new book, Everything Is Miscellaneous, this weekend. I’d been looking forward to it. Weinberger has a supple, curious mind and an easy way with words. Even though I rarely agree with his conclusions, he gets the brain moving – and that’s what matters. But I have to say I didn’t get very far in the book, at least not this weekend. In fact, I only reached the bottom of page nine, at which point I crashed into this passage about music:

For decades we’ve been buying albums. We thought it was for artistic reasons, but it was really because the economics of the physical world required it: Bundling songs into long-playing albums lowered the production, marketing, and distribution costs because there were fewer records to make, ship, shelve, categorize, alphabetize, and inventory. As soon as music went digital, we learned that the natural unit of music is the track. Thus was iTunes born, a miscellaneous pile of 3.5 million songs from a thousand record labels. Anyone can offer music there without first having to get the permission of a record executive.

“… the natural unit of music is the track”? Well, roll over, Beethoven, and tell Tchaikovsky the news.

There’s a lot going on in that brief passage, and almost all of it is wrong. Weinberger does do a good job, though, of condensing into a few sentences what might be called the liberation mythology of the internet. This mythology is founded on a sweeping historical revisionism that conjures up an imaginary predigital world – a world of profound physical and economic constraints – from which the web is now liberating us. We were enslaved, and now we are saved. In a bizarrely fanciful twist, the digital world is presented as a “natural” counterpoint to the supposed artificiality of the physical world.

I set the book aside and fell to pondering. Actually, the first thing I did was to sweep the junk off the dust cover of my sadly neglected turntable and pull out an example of one of those old, maligned “long-playing albums” from my shrunken collection of cardboard-sheathed LPs (arrayed alphabetically, by artist, on a shelf in a cabinet). I chose Exile on Main Street. More particularly, I chose the unnatural bundle of tracks to be found on side three of Exile on Main Street. Carefully holding the thin black slab of scratched, slightly warped, but still serviceable vinyl by its edges – you won’t, I trust, begrudge me a pang of nostalgia for the outdated physical world – I eased it onto the spindle and set the platter to spinning at a steady thirty-three-and-a-third revolutions per minute.

Now, if you’re not familiar with Exile on Main Street, or if you know it only in a debauched digital form – whether as a single-sided plastic CD (yuk) or as a pile of miscellaneous undersampled iTunes tracks (yuk squared) – let me explain that side three is the strangest yet the most crucial of the four sides of the Stones’ double-record masterpiece. The side begins, literally, in happiness – or Happyness – and ends, figuratively, in a dark night of the soul. (I realize that, today, it’s hard to imagine Mick Jagger having a dark night of the soul, but at the dawn of the gruesome seventies, with the wounds of Brian Jones’s death, Marianne Faithfull’s overdose, and Altamont’s hippie apocalypse still fresh in his psyche, Mick was, I imagine, suffering from an existential pain that neither a needle and a spoon nor even another girl could fully take away.)

But it’s the middle tracks of the platter that seem most pertinent to me in thinking about Weinberger’s argument. Between Keith’s ecstatic, grinning-at-death “Happy” and Mick’s desperate, shut-the-lights “Let It Loose” come three offhand, wasted-in-the-basement songs – “Turd on the Run,” “Ventilator Blues,” and “Just Wanna See His Face” – that sound, in isolation, like throwaways. If you unbundled Exile and tossed these tracks onto the miscellaneous iTunes pile, they’d sink, probably without a trace. I mean, who’s going to buy “Turd on the Run” as a standalone track? And yet, in the context of the album that is Exile on Main Street, the three songs achieve a remarkable, tortured eloquence. They become necessary. They transcend their identity as tracks, and they become part of something larger. They become art.

Listening to Exile, or to any number of other long-playing bundles – The Velvet Underground & Nico, Revolver, Astral Weeks, Every Picture Tells a Story, Mott, Blood on the Tracks, Station to Station, London Calling, Get Happy!, Murmur, Tim (the list, thankfully, goes on and on) – I could almost convince myself that the 20-minute-or-so side of an LP is not just some ungainly byproduct of the economics of the physical world but rather the “natural unit of music.” As “natural” a unit, anyway, as the individual track.

The long-playing phonograph record, twelve inches in diameter and spinning at a lazy 33 rpm, is, even today, a fairly recent technological development. (In fact, recorded music in general is a fairly recent technological development.) After a few failed attempts to produce a long-player in the early thirties, the modern LP was introduced in 1948 by a record executive named Edward Wallerstein, then the president of Columbia Records, a division of William Paley’s giant Columbia Broadcasting System. At the time, the dominant phonograph record had for about a half century been the 78 – a fragile, ten-inch shellac disk that spun at seventy-eight rpm and could hold only about three or four minutes of music on a side.

Wallerstein, being a record executive, invented the long-player as a way to “bundle” a lot of tracks onto a single disk in order to enhance the economics of the business and force customers to buy a bunch of songs that they didn’t want to get a track or two that they did want. Right? Wrong. Wallerstein in fact invented the long-player because he wanted a format that would do justice to performances of classical works, which, needless to say, didn’t lend themselves all that well to three-minute snippets.

Before his death in 1970, Wallerstein recalled how he pushed a team of talented Columbia engineers to develop the modern record album (as well as a practical system for playing it):

Every two months there were meetings of the Columbia Records people and Bill Paley at CBS. [Jim] Hunter, Columbia’s production director, and I were always there, and the engineering team would present anything that might have developed. Toward the end of 1946, the engineers let Adrian Murphy, who was their technical contact man at CBS, know that they had something to demonstrate. It was a long-playing record that lasted seven or eight minutes, and I immediately said, “Well, that’s not a long-playing record.” They then got it to ten or twelve minutes, and that didn’t make it either. This went on for at least two years.

Mr. Paley, I think, got a little sore at me, because I kept saying, “That’s not a long-playing record,” and he asked, “Well, Ted, what in hell is a long-playing record?” I said, “Give me a week, and I’ll tell you.”

I timed I don’t know how many works in the classical repertory and came up with a figure of seventeen minutes to a side. This would enable about 90% of all classical music to be put on two sides of a record. The engineers went back to their laboratories. When we met in the fall of 1947 the team brought in the seventeen-minute record.

The long-player was not, in other words, a commercial contrivance aimed at bundling together popular songs to the advantage of record companies and the disadvantage of consumers; it was a format specifically designed to provide people with a much better way to listen to recordings of classical works. In fact, in focusing on perfecting a medium for classical performances, Columbia actually sacrificed much of the pop market to its rival RCA, which at the time was developing a competing record format: the seven-inch, forty-five-revolutions-per-minute single. Recalls Wallerstein:

There was a long discussion as to whether we should move right in [to the market with the LP] or first do some development work on better equipment for playing these records or, most important, do some development work on a popular record to match these 12-inch classical discs. Up to now our thinking had been geared completely to the classical market rather than to the two- or three-minute pop disc market.

I was in favor of waiting a year or so to solve these problems and to improve the original product. We could have developed a 6- or 7-inch record and equipment to handle the various sizes for pops. But Paley felt that, since we had put $250,000 into the LP, it should be launched as it was. So we didn’t wait and in consequence lost the pops market to the RCA 45s.

A brief standards war ensued between the LP and the 45 – it was called “the battle of speeds” – which concluded, fortunately, with a technological compromise that allowed both to flourish. Record players were designed to accommodate both 33 rpm albums and 45 rpm singles (and, for a while, anyway, the old 78s as well). The 45 format allowed consumers to buy popular individual songs for a relatively low price, while the LP provided them with the option of buying longer works for a somewhat higher price. Of course, popular music soon moved onto LPs, as musicians and record companies sought to maximize their sales and provide fans with more songs by their favorite artists. The introduction of the pop LP did not force customers to buy more songs than they wanted – they could still cherry-pick individual tracks by buying 45s. The LP expanded people’s choices, giving them more of the music they clamored for.

Indeed, in suggesting that the long-player resulted in a big pile of “natural” tracks being bundled together into artificial albums, Weinberger gets it precisely backwards. It was the arrival of the LP that set off the explosion in the number of popular music tracks available to buyers. It also set off a burst of incredible creativity in popular music, as bands, songwriters, and solo performers began to take advantage of the new, extended format, to turn the longer medium to their own artistic purposes. The result was a great flowering not only of wonderful singles, sold as 45s, but of carefully constructed sets of songs, sold as LPs. Was there also a lot of filler? Of course there was. When hasn’t there been?

Weinberger also gets it backwards in suggesting that the LP was a record industry ploy to constrain the supply of products – in order to have “fewer records to make, ship, shelve, categorize, alphabetize, and inventory.” The album format, combined with the single format, brought a huge increase in the number of records – and, in turn, in the outlets that sold them. It unleashed a flood of recorded music. It’s worth remembering that the major competitor to the record during this time was radio, which of course provided music for free. (The arrival of radio nearly killed off the recorded music industry, in fact.) The best way – the only way – for record companies to compete against radio was to increase the number of records they produced, to give customers far more choices than radio could send over the airwaves. The long-playing album, in sum, not only gave buyers many more products to choose from; it gave artists many more options for expressing themselves, to everyone’s benefit. Far from being a constraint on the market, the physical format of the long-player was a great spur to consumer choice and, even more important, to creativity. Who would unbundle Exile on Main Street or Blonde on Blonde or Tonight’s the Night – or, for that matter, Dirty Mind or Youth and Young Manhood or (Come On Feel the) Illinoise? Only a fool would.

And yet it is the wholesale unbundling of LPs into a “miscellaneous pile” of compressed digital song files that Weinberger would have us welcome as some kind of deliverance from decades of apparent servitude to the long-playing album. One doesn’t have to be an apologist for record executives – who in recent years have done a great job in proving their cynicism and stupidity – to recognize that Weinberger is warping history in an attempt to prove an ideological point. Will the new stress on discrete digital tracks bring a new flowering of creativity in music? I don’t know. Maybe we’ll get a pile of gems, or maybe we’ll get a pile of crap. Probably we’ll get a mix. But I do know that the development of the physical long-playing album, together with the physical single, was a development that we should all be grateful for. We probably shouldn’t rush out to dance on the album’s grave.

As for the individual track being the “natural unit of music,” that’s a fantasy. Natural’s not in it.

Not addiction; dependency

This week’s New Yorker features an article, by Julia Ioffe, on Chatroulette, the quirky video chat service that at this point seems mainly of interest to pervs and reporters. Ioffe suggests that, in addition to all the wank artists and show-me-your-tits doofuses, expeditions into “the Chatroulette vortex” also reveal “a lot of joy”:

There is, for example, the video of the dancing banana, crudely drawn on lined paper, exhorting people to “Dance or gtfo!” (Dance or get the fuck out.) The banana’s partners usually respond with wriggling delight.

Well, one gathers one’s joy where one can these days.

Much of Ioffe’s piece is devoted to a profile of Andrey Ternovskiy, the “shy and evasive” Russian teenager who was inspired to invent Chatroulette out of, he claims, a love for “exploring other cultures” that apparently developed during a brief stint selling tchotchkes to tourists in Moscow. “Like much of his generation,” Ioffe writes, “Ternovskiy has an online persona far more developed than his real one.” The young man started skipping school in his early teens, preferring to spend his days at his computer. “The last three years at school, I haven’t done anything,” he tells Ioffe. “I just can’t make myself. There’s so much interesting stuff in the world, and I have to sit there with textbooks?” Ioffe comments:

By “the world,” of course, Ternovskiy means the Internet, which is also where most of his friends are. His closest confidant is a Russian immigrant named Kirill Gura, who lives in Charleston, West Virginia. Every night for the past five years, Ternovskiy has turned on his computer, found Kirill on MSN Messenger, and talked to him until one of them fell asleep. “He’s a real friend,” Ternovskiy says … Ternovskiy says that he sees the computer as “one hundred percent my window into the world.” He doesn’t seek much else. “I always believed that computer might be that thing that I only need, that I only need that thing to survive,” he says. “It might replace everything.”

Ternovskiy’s case is, of course, an extreme one, but it’s also, whether we care to admit or not, representative. The world of the screen hasn’t replaced everything, but, for most of us, whether we’re of Ternovskiy’s generation or not, it has replaced a lot. According to recent media surveys, the average American spends some 8.5 hours a day peering at a screen – TV, computer, or cell phone – and that number continues to rise as smartphone use explodes. We’ve reached a point, in other words, where it’s more likely than not that we’re looking into a screen at any given moment when we’re awake.

Last month, the University of Maryland’s International Center for Media & the Public Agenda released the results of an informal study of college students’ attitudes toward media. Two hundred students at the school were asked to refrain from using any electronic media for a day and to write about their experiences. The students, the researchers reported, “use literal terms of addiction to characterize their dependence on media.” By using the a-word – “addiction” – the researchers assured themselves of a burst of media attention. (If there’s one thing we’re addicted to these days, it’s the word “addiction.”) “College students are ‘addicted’ to social media and even experience withdrawal symptoms from it,” ran a typical headline. “According to a new study out of the University of Maryland, students are addicted to social media, and computers and smartphones deliver their drug,” began a story at the Huffington Post. Predictably, the overheated reports were quickly countered by a flood of counter-reports pointing out the silliness of confusing the language of addiction with addiction itself.

The use of the addiction metaphor gave everybody an easy way to discuss, and dismiss, the study without actually looking at the study’s results, which provided a fascinating look at how we live today. Here’s a brief, representative sampling of how students described the experience of going without their devices for just a few hours:

“Texting and IMing my friends gives me a constant feeling of comfort. When I did not have those two luxuries, I felt quite alone and secluded from my life. Although I go to a school with thousands of students, the fact that I was not able to communicate with anyone via technology was almost unbearable.”

“Not having a cell phone created a logistical problem. It was manageable for one day, but I cannot see how life would be possible without one.”

“My attempt at the gym without the ear pieces in my iPhone wasn’t the same; doing cardio listening to yourself breath really drains your stamina.”

“It is almost second nature to check my Facebook or email; it was very hard for my mind to tell my body not to go on the Internet.”

“I began to compare my amount of media usage to that of my friends. I realized that I don’t usually check or update Facebook or Twitter like a lot of my friends that have Blackberrys or iPhones. I did however realize that as soon as I get home from class it has become a natural instinct to grab my computer (not to do school work which is the sole reason my parents got me my computer!) but to check my email, Gmail, umd account mail, Facebook account, Twitter account, Skype, AIM, and ELMS: that’s six websites and four social networking sites. This in itself is a wake-up call! I was so surprised to think that I probably spend at least 1-2 hours on these sites alone BEFORE I even make it to attempting my homework and then continue checking these websites while doing my school work.”

“With classes, location, and other commitments it’s hard to meet with friends and have a conversation. Instant messaging, SMS, and Facebook are all ways to make those connections with convenience, and even a heightened sense of openness. I believe that people are more honest about how they really feel through these media sources because they are not subject to nonverbal signals like in face to face communication.”

“When I was walking to class I always text and listen to my iPod so the walk to class felt extremely long and boring unlike all the other times.”

“My short attention span prevented me from accomplishing much, so I stared at the wall for a little bit. After doing some push-ups, I just decided to take a few Dramamine and go to sleep to put me out of my misery.”

“On a psychological note, my brain periodically went crazy because I found at times that I was so bored I didn’t know what to do with myself.”

“I clearly am addicted and the dependency is sickening. I feel like most people these days are in a similar situation, for between having a Blackberry, a laptop, a television, and an iPod, people have become unable to shed their media skin.”

“The day seemed so much longer and it felt like we were trying to fill it up with things to do as opposed to running out of time to do all of the things we wanted to do.”

“I couldn’t take it anymore being in my room…alone…with nothing to occupy my mind so I gave up shortly after 5pm. I think I had a good run for about 19 hours and even that was torture.”

“Honestly, this experience was probably the single worst experience I have ever had.”

And so on.

The problem with the addiction metaphor, which as these quotes show is easy to indulge in, is that it presents the normal as abnormal and hence makes it easy for us to distance ourselves from our own behavior and its consequences. By dismissing talk of “Internet addiction” as rhetorical overkill, which it is, we also avoid undertaking an honest examination of how deeply our media devices have been woven into our lives and how they are shaping those lives in far-reaching ways, for better and for worse. In the course of just a decade, we have become profoundly dependent on a new and increasingly pervasive technology.

There’s nothing unusual about this. We routinely become dependent on popular, useful technologies. If people were required to live without their cars or their indoor plumbing for a day, many of them would probably resort to the language of addiction to describe their predicament. I know that, after a few hours, I’d be seriously jonesing for that toilet. What’s important is to be able to see what’s happening as we adapt to a new technology – and the problem with the addiction metaphor is that it makes it too easy to avert our eyes.

The addiction metaphor also distorts the nature of technological change by suggesting that our use of a technology stems from a purely personal choice – like the choice to smoke or to drink. An inability to control that choice becomes, in this view, simply a personal failing. But while it’s true that, in the end, we’re all responsible for how we spend our time, it’s an oversimplification to argue that we’re free “to choose” whether and how we use computers and cell phones, as if social norms, job expectations, familial responsibilities, and other external pressures had nothing to do with it. The deeper a technology is woven into the patterns of everyday life, the less choice we have about whether and how we use that technology.

When it comes to the digital networks that now surround us, the fact is that most of us can’t just GTFO, even if we wanted to. The sooner we move beyond the addiction metaphor, the sooner we’ll be able to see, with some clarity and honesty, the extent and implications of our dependency on our networked computing and media devices. What happens to the human self as it comes to experience more and more of the world, and of life, through the mediation of the screen?

At the end of Ioffe’s piece, she reports on a recent trip that Ternovskiy made to West Virginia to meet his IM buddy and “real friend,” Kirill Gura, face to face: “‘It was a little weird, you know,’ Ternovskiy told me later. ‘We was just looking at each other without having much to say.’” At this point, there’s probably a little Ternovskiy in all of us.

My own private internet

Here’s Yahoo CEO Carol Bartz, in a new Esquire interview, describing her vision of the future of the Net:

I call it the Internet of One. I want it to be mine, and I don’t want to work too hard to get what I need. In a way, I want it to be HAL. I want it to learn about me, to be me, and cull through the massive amount of information that’s out there to find exactly what I want.

Cool. Going online would feel like being isolated in one of those comfy suspended-animation capsules where HAL kept the crew members in 2001:

[Image: a crew member in a suspended-animation capsule, from 2001: A Space Odyssey]

That turned out well, as I recall.

Sunday rambles

The editors of n+1 examine the rise of “webism” and some of its paradoxes:

The webists met the [New York] Times’s schizophrenia with a schizophrenia of their own. The worst of them simply cheered the almost unbelievably rapid collapse of the old media, which turned out, for all its seeming influence and power, to be a paper tiger, held up by elderly white men. But the best of them were given pause: themselves educated by newspapers, magazines, and books, they did not wish for these things to disappear entirely. (For one thing, who would publish their books?) In fact, with the rise of web 2.0 and the agony of the print media, a profound contradiction came into view. Webism was born as a technophilic left-wing splinter movement in the late 1960s, and reborn in early ’80s entrepreneurial Silicon Valley, and finally fully realized by the generation born around 1980. Whether in its right-leaning libertarian or left-leaning communitarian mode it was against the Man, and all the minions of the Man: censorship, outside control, narrative linearity. It was against elitism; it was against inequality. But it wasn’t against culture. It wasn’t against books! An Apple computer—why, you could write a book with one of those things. (Even if they were increasingly shaped and designed mostly so you could watch a movie.) One of the mysteries of webism has always been what exactly it wanted …

In The American Scholar, Sven Birkerts thinks about technological change and the future of imagination and the creative mind:

From the vantage point of hindsight, that which came before so often looks quaint, at least with respect to technology. Indeed, we have a hard time imagining that the users weren’t at some level aware of the absurdity of what they were doing. Movies bring this recognition to us fondly; they give us the evidence. The switchboard operators crisscrossing the wires into the right slots; Dad settling into his luxury automobile, all fins and chrome; Junior ringing the bell on his bike as he heads off on his paper route. The marvel is that all of them—all of us—concealed their embarrassment so well. The attitude of the present to the past . . . well, it depends on who is looking. The older you are, the more likely it is that your regard will be benign—indulgent, even nostalgic. Youth, by contrast, quickly gets derisive, preening itself on knowing better, oblivious to the fact that its toys will be found no less preposterous by the next wave of the young.

In the Times Magazine, Gary Wolf speculates that obsessive self-monitoring may be moving out of the fringe and into the mainstream:

Ubiquitous self-tracking is a dream of engineers. For all their expertise at figuring out how things work, technical people are often painfully aware how much of human behavior is a mystery. People do things for unfathomable reasons. They are opaque even to themselves. A hundred years ago, a bold researcher fascinated by the riddle of human personality might have grabbed onto new psychoanalytic concepts like repression and the unconscious. These ideas were invented by people who loved language. Even as therapeutic concepts of the self spread widely in simplified, easily accessible form, they retained something of the prolix, literary humanism of their inventors. From the languor of the analyst’s couch to the chatty inquisitiveness of a self-help questionnaire, the dominant forms of self-exploration assume that the road to knowledge lies through words. Trackers are exploring an alternate route. Instead of interrogating their inner worlds through talking and writing, they are using numbers. They are constructing a quantified self.

Placing the spreadsheeting-of-the-self trend in the context of the social-networking trend, Wolf observes, “You might not always have something to say, but you always have a number to report.” To give it a different spin: Who needs imagination when you have the data?

Realtimesink

James Sturm, the cartoonist, continues to post about his experience going cold turkey from the Internet. (He writes up his accounts and has someone else send them in to Slate for publication.) “Cutting myself off from the Internet hasn’t been easy,” he confesses in his latest missive from the offline world. “The Web had burrowed deeper into my domestic life than I’d realized.” But his isolation from the Net’s realtime stream of distractions has brought a burst of creativity and productivity:

One benefit of being offline so far is that I am drawing a lot more than I was before. I knew committing to do this column would force me to produce, but I am heartened by how seamlessly my time spent connected to the Internet has become time spent drawing. In the last two weeks, I’ve already filled up a 40-page 4″x6″ photo album (I purchase these in 99-cent stores) with watercolor paintings. This work seems to foster patience (I literally have to wait for the paint to dry), whereas on the Web, I was a hyperactive child with zero attention span.

There’s been much written about how the Web provides new opportunities for people to express themselves. That’s true, and welcome. But the Web is also an enormous global timesink, sucking up massive amounts of time that might have gone into more productive, thoughtful, and fulfilling activities. It’s difficult to measure the cost of this wasted time, because it’s impossible to know what people might have done if they weren’t surfing and tweeting and youtubing and huluing and foursquaring and emailing and IMing and googling, and so on. The Web often gives us the illusion of having an incredibly diverse set of pursuits when it’s really narrowing the scope of our thoughts and activities. There is still a whole lot more that people can do offline than online – something that’s easy to forget as we peer into our screens all day.

James Sturm’s experience should give us all pause. What might we be accomplishing if we weren’t tethered to the Net?

Cold off the press

On Saturday, the UPS guy showed up with a printed, bound, and jacketed copy of my book The Shallows. It’s exciting and gratifying, of course, to receive a finished copy of a book that’s been in the works for a couple of years, but it’s also scarifying. No more edits, corrections, updates, rethinks: the ink is indelible. The phrase shouldn’t be “hot off the press” – hot things tend to be malleable – but rather “cold off the press.”

Oh, well. It’s now the reader’s book, not the writer’s.

One thing I have no mixed feelings about is the set of endorsements that the publisher has gathered from early readers of the book. They’re all from writers and thinkers I admire, and – shucks – here they are:

“Neither a tub-thumpingly alarmist jeremiad nor a breathlessly Panglossian ode to the digital self, Nicholas Carr’s The Shallows is a deeply thoughtful, surprising exploration of our ‘frenzied’ psyches in the age of the Internet. Whether you do it in pixels or pages, read this book.” —Tom Vanderbilt, author of Traffic: Why We Drive the Way We Do (and What It Says About Us)

“Nicholas Carr carefully examines the most important topic in contemporary culture — the mental and social transformation created by our new electronic environment. Without ever losing sight of the larger questions at stake, he calmly demolishes the clichés that have dominated discussions about the Internet. Witty, ambitious, and immensely readable, The Shallows actually manages to describe the weird, new, artificial world in which we now live.” —Dana Gioia, poet and former Chairman of the National Endowment for the Arts

“Nicholas Carr has written an important and timely book. See if you can stay off the web long enough to read it!” —Elizabeth Kolbert, author of Field Notes from a Catastrophe: Man, Nature, and Climate Change

“The core of education is this: developing the capacity to concentrate. The fruits of this capacity we call civilization. But all that is finished, perhaps. Welcome to the shallows, where the un-educating of homo sapiens begins. Nicholas Carr does a wonderful job synthesizing the recent cognitive research. In doing so, he gently refutes the ideologists of progress, and shows what is really at stake in the daily habits of our wired lives: the re-constitution of our minds. What emerges for the reader, inexorably, is the suspicion that we have well and truly screwed ourselves.” —Matthew B. Crawford, author of Shop Class as Soulcraft

“Ultimately, The Shallows is a book about the preservation of the human capacity for contemplation and wisdom, in an epoch where both appear increasingly threatened. Nick Carr provides a thought-provoking and intellectually courageous account of how the medium of the Internet is changing the way we think now and how future generations will or will not think. Few works could be more important.” —Maryanne Wolf, director of the Tufts University Center for Reading and Language Research and author of Proust and the Squid: The Story and Science of the Reading Brain

The book comes out June 7.