Monthly Archives: July 2011

God, Kevin Kelly and the myth of choices

I suspect it’s accurate to say that Kevin Kelly’s deep Christian faith makes him something of an outlier among the Bay Area tech set. It also adds some interesting layers and twists to his often brilliant thinking about technology, requiring him to wrestle with ambiguities and tensions that most in his cohort are blind to. In a new interview with Christianity Today, Kelly explains the essence of what the magazine refers to as his “geek theology”:

We are here to surprise God. God could make everything, but instead he says, “I bestow upon you the gift of free will so that you can participate in making this world. I could make everything, but I am going to give you some spark of my genius. Surprise me with something truly good and beautiful.” So we invent things, and God says, “Oh my gosh, that was so cool! I could have thought of that, but they thought of that instead.”

I confess I have a little trouble imagining God saying something like “Oh my gosh, that was so cool!” It makes me think that Kelly’s God must look like Jeff Spicoli:

[image: Jeff Spicoli]

But beyond the curious lingo, Kelly’s attempt to square Christianity with the materialist thrust of technological progress is compelling – and moving. If you’re going to have a geek theology, it seems wise to begin with a sense of the divinity of the act of making. In creating technology, then, we are elaborating, extending creation itself – carrying on God’s work, in Kelly’s view. Kelly goes on to offer what he terms “a technological metaphor for Jesus,” which stems from his experience watching computer game-makers create immersive virtual worlds and then enter the worlds they’ve created:

I had this vision of the unbounded God binding himself to his creation. When we make these virtual worlds in the future — worlds whose virtual beings will have autonomy to commit evil, murder, hurt, and destroy options — it’s not unthinkable that the game creator would go in to try to fix the world from the inside. That’s the story of Jesus’ redemption to me. We have an unbounded God who enters this world in the same way that you would go into virtual reality and bind yourself to a limited being and try to redeem the actions of the other beings since they are your creations … For some technological people, that makes [my] faith a little more understandable.

Kelly’s personal relationship to technology is complex. He may be a technophile in the abstract – a geek in the religious sense – but in his own life he takes a wary, skeptical view of new gadgets and other tools, resisting rather than giving in to their enchantments in order to protect his own integrity. Inspired by the example of the Amish, he is a technological minimalist: “I seek to find those technologies that assist me in my mission to express love and reflect God in the world, and then disregard the rest.” One senses here that Kelly is most interested in technological progress as a source of metaphor, a means of probing the mystery of existence. The interest is, oddly enough, a fundamentally literary one.

The danger with metaphor is that, like technology, it can be awfully seductive; it can skew one’s view of reality. In the interview, as in his recent, sweeping book, What Technology Wants, Kelly argues that technological progress is a force for good in the world, a force of “love,” because it serves to expand the choices available to human beings, to give people more “opportunities to express their unique set of God-given gifts.” Kelly therefore believes, despite his wariness about the effects of technology on his own life, that he has a moral duty to promote rapid technological innovation. If technology is love, then, by definition, the more of it, the better:

I want to increase all the things that help people discover and use their talents. Can you imagine a world where Mozart did not have access to a piano? I want to promote the invention of things that have not been invented yet, with a sense of urgency, because there are young people born today who are waiting upon us to invent their aids. There are Mozarts of this generation whose genius will be hidden until we invent their equivalent of a piano — maybe a holodeck or something. Just as you and I have benefited from the people who invented the alphabet, books, printing, and the Internet, we are obligated to materialize as many inventions as possible, to hurry, so that every person born and to-be-born will have a great chance of discovering and sharing their godly gifts.

There is a profound flaw in this view of progress. While I think that Kelly could make a strong case that technological progress increases the number of choices available to people in general, he goes beyond that to suggest that the process is continuously additive. Progress gives and never takes away. Each new technology means more choices for people. But that’s not true. When it comes to choices, technological progress both gives and takes away. It closes some possibilities even as it opens others. You can’t assume that, for any given child, technological advance will increase the likelihood that she will fulfill her natural potential – or, in Kelly’s words, discover and share her unique godly gifts. It may well reduce that likelihood.

The fallacy in Kelly’s thinking becomes apparent as soon as you look closely at his Mozart example (which he also uses in his book). The fact that Mozart was born after the invention of the piano, and that the piano was essential to Mozart’s ability to fulfill his potential, is evidence, according to Kelly’s logic, of the beneficence of progress. But while it’s true that if Mozart had been born 300 years earlier, the less advanced state of technology might have prevented him from fulfilling his potential, it’s equally true that if he had been born 300 years later, the more advanced state of technology might just as surely have prevented him from achieving it. It’s absurd to believe that if Mozart were living today, he would create the great works he created in the eighteenth century – the symphonies, the operas, the concertos. Technological progress has transformed the world into one that is less suited to an artist of Mozart’s talents.

Genius emerges at the intersection of unique individual human potential and unique temporal circumstances. As circumstances change, some people’s ability to fulfill their potential will increase, but other people’s will decrease. Progress does not simply expand options. It changes options, and along the way options are lost as well as gained. Homer lived in a world that we would call technologically primitive, yet he created immortal epic poems. If Homer were born today, he would not be able to compose those poems in his head. That possibility has been foreclosed by progress. For all we know, if Homer (or Mozart) were born today, he would end up being an advertising copywriter, and perhaps not even a very good one.

Look at any baby born today, and try to say whether that child would have a greater possibility of fulfilling its human potential if during its lifetime (a) technological progress reversed, (b) technological progress stalled, (c) technological progress advanced slowly, or (d) technological progress accelerated quickly. You can’t. Because it’s unknowable.

The best you can argue, therefore, is that technological progress will, on balance, have a tendency to open more choices for more people. But that’s not a moral argument about the benefits of progress; it’s a practical argument, an argument based on calculations of utility. If, at the individual level, new technology may actually prevent people from discovering and sharing their “godly gifts,” then technology is not itself godly. Why would God thwart His own purposes? Technological progress is not a force of cosmic goodness, and it is surely not a force of cosmic love. It’s an entirely earthly force, as suspect as the flawed humans whose purposes it suits. Kelly’s belief that we are morally obligated “to materialize as many inventions as possible” and “to hurry” in doing so is not only based on a misperception; it’s foolhardy and dangerous.

McLuhan at 100

This week — Thursday, July 21, to be precise — marks the 100th anniversary of Marshall McLuhan’s birth. Here are some thoughts on the man and his legacy.

One of my favorite YouTube videos is a clip from a 1968 Canadian TV show featuring a debate between Norman Mailer and Marshall McLuhan. The two men, both icons of the sixties, could hardly be more different. Leaning forward in his chair, Mailer is pugnacious, animated, engaged. McLuhan, abstracted and smiling wanly, seems to be on autopilot. He speaks in canned riddles. “The planet is no longer nature,” he declares, to Mailer’s uncomprehending stare; “it’s now the content of an art work.”

Watching McLuhan, you can’t quite decide whether he was a genius or just had a screw loose. Both impressions, it turns out, are valid. As the novelist Douglas Coupland argued in his recent biography, Marshall McLuhan: You Know Nothing of My Work!, McLuhan’s mind was probably situated at the mild end of the autism spectrum. He also suffered from a couple of major cerebral traumas. In 1960, he had a stroke so severe that he was given his last rites. In 1967, just a few months before the Mailer debate, surgeons removed a tumor the size of a small apple from the base of his brain. A later procedure revealed that McLuhan had an extra artery pumping blood into his cranium.

Between the stroke and the tumor, McLuhan managed to write a pair of extravagantly original books. The Gutenberg Galaxy, published in 1962, explored the cultural and personal consequences of the invention of the printing press, arguing that Gutenberg’s invention shaped the modern mind. Two years later, Understanding Media extended the analysis to the electric media of the twentieth century, which, McLuhan argued, were destroying the individualist ethic of print culture and turning the world into a tightly networked global village. The ideas in both books drew heavily on the works of other thinkers, including such contemporaries as Harold Innis, Albert Lord, and Wyndham Lewis, but McLuhan’s synthesis was, in content and tone, unlike anything that had come before.

When you read McLuhan today, you find all sorts of reasons to be impressed by his insight into media’s far-reaching effects and by his anticipation of the course of technological progress. When he looked at a Xerox machine in 1966, he didn’t just see the ramifications of cheap photocopying, as great as they were. He foresaw the transformation of the book from a manufactured object into an information service: “Instead of the book as a fixed package of repeatable and uniform character suited to the market with pricing, the book is increasingly taking on the character of a service, an information service, and the book as an information service is tailor-made and custom-built.” That must have sounded outrageous a half century ago. Today, with books shedding their physical skins and turning into software programs, it sounds like a given.

You also realize that McLuhan got a whole lot wrong. One of his central assumptions was that electric communication technologies would displace the phonetic alphabet from the center of culture, a process that he felt was well under way in his own lifetime. “Our Western values, built on the written word, have already been considerably affected by the electric media of telephone, radio, and TV,” he wrote in Understanding Media. He believed that readers, because their attention is consumed by the act of interpreting the visual symbols of alphabetic letters, become alienated from their other senses, sacrifice their attachment to other people, and enter a world of abstraction, individualism, and rigorously linear thinking. This, for McLuhan, was the story of Western civilization, particularly after the arrival of Gutenberg’s press.

By freeing us from our single-minded focus on the written word, new technologies like the telephone and the television would, he argued, broaden our sensory and emotional engagement with the world and with others. We would become more integrated, more “holistic,” at both a sensory and a social level, and we would recoup some of our primal nature. But McLuhan failed to anticipate that, as the speed and capacity of communication networks grew, what they would end up transmitting more than anything else is text. The written word would invade electric media. If McLuhan were to come back to life today, the sight of people using their telephones as reading and writing devices would blow his mind. He would also be amazed to discover that the fuzzy, low-definition TV screens that he knew (and on which he based his famous distinction between hot and cold media) have been replaced by crystal-clear, high-definition monitors, which more often than not are crawling with the letters of the alphabet. Our senses are more dominated by the need to maintain a strong, narrow visual focus than ever before. Electric media are social media, but they are also media of isolation. If the medium is the message, then the message of electric media has turned out to be far different from what McLuhan supposed.


Of course, the fact that some of his ideas didn’t pan out wouldn’t have bothered McLuhan much. He was far more interested in playing with ideas than nailing them down. He intended his writings to be “probes” into the present and the future. He wanted his words to knock readers out of their intellectual comfort zones, to get them to entertain the possibility that their accepted patterns of perception might need reordering. Fortunately for him, he arrived on the scene at a rare moment in history when large numbers of people wanted nothing more than to have their minds messed with.

McLuhan was a scholar of literature, with a doctorate from Cambridge, and his interpretation of the intellectual and social effects of media was richly allusive and erudite. But what particularly galvanized the public and the press was the weirdness of his prose. Perhaps as a consequence of his unusual mind, he had a knack for writing sentences that sounded at once clinical and mystical. His books read like accounts of acid trips written by a bureaucrat. That kaleidoscopic, almost psychedelic style made him a darling of the counterculture — the bearded and the Birkenstocked embraced him as a guru — but it alienated him from his colleagues in academia. To them, McLuhan was a celebrity-seeking charlatan.

Neither his fans nor his foes saw him clearly. The central fact of McLuhan’s life was his conversion, at the age of twenty-five, to Catholicism, and his subsequent devotion to the religion’s rituals and tenets. He became a daily Mass-goer. Though he never discussed it, his faith forms the moral and intellectual backdrop to all his mature work. What lay in store, McLuhan believed, was the timelessness of eternity. The earthly conceptions of past, present, and future were by comparison of little consequence. His role as a thinker was not to celebrate or denigrate the world but simply to understand it, to recognize the patterns that would unlock history’s secrets and thus provide hints of God’s design. His job was not dissimilar, as he saw it, from that of the artist.

That’s not to say that McLuhan was without secular ambition. Coming of age at the dawn of mass media, he very much wanted to be famous. “I have no affection for the world,” he wrote to his brother in the late thirties, at the start of his academic career. But in the same letter he disclosed the “large dreams” he harbored for “the bedazzlement of men.” Modern media needed its own medium, the voice that would explain its transformative power to the world, and he would be it.

The tension between McLuhan’s craving for earthly attention and his distaste for the material world would never be resolved. Even as he came to be worshipped as a techno-utopian seer in the mid-sixties, he had already, writes Coupland, lost all hope “that the world might become a better place with new technology.” He heralded the global village, and was genuinely excited by its imminence and its possibilities, but he also saw its arrival as the death knell for the literary culture he revered. The electronically connected society would be the setting not for the further flourishing of civilization but for the return of tribalism, if on a vast new scale. “And as our senses [go] outside us,” he wrote, “Big Brother goes inside.” Always on display, always broadcasting, always watched, we would become mediated, technologically and socially, as never before. The intellectual detachment that characterizes the solitary thinker — and that was the hallmark of McLuhan’s own work — would be replaced by the communal excitements, and constraints, of what we have today come to call “interactivity.”


McLuhan also saw, with biting clarity, how all mass media are fated to become tools of commercialism and consumerism — and hence instruments of control. The more intimately we weave media into our lives, the more tightly we become locked in a corporate embrace: “Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit by taking a lease on our eyes and ears and nerves, we don’t really have any rights left.” Has a darker vision of modern media ever been expressed?

“Many people seem to think that if you talk about something recent, you’re in favor of it,” McLuhan explained during an uncharacteristically candid interview in 1966. “The exact opposite is true in my case. Anything I talk about is almost certain to be something I’m resolutely against, and it seems to me the best way of opposing it is to understand it, and then you know where to turn off the button.” Though the founders of Wired magazine would posthumously appoint McLuhan as the “patron saint” of the digital revolution, the real McLuhan was as much a Luddite as a technophile. He would have found the collective banality of Facebook abhorrent, if also fascinating.

In the fall of 1979, McLuhan suffered another major stroke, and this time he would not recover. Though he regained consciousness, he remained unable to read, write, or speak until his death a little more than a year later. A lover of words — his favorite book was Joyce’s Finnegans Wake — he died in a state of wordlessness. He had fulfilled his own prophecy and become post-literary.

This post, along with seventy-eight others, is collected in the book Utopia Is Creepy.

Whither journalism: round two

My debate on the net’s effect on journalism with Jay Rosen has entered the second, rebuttal round over at the Economist’s site.

Here’s my rebuttal:

Jay Rosen grants that the internet has left us with “a weaker eye on power” while increasing “the supply of rubbish in and around journalism”. As a counterweight, he gives us ten reasons to be cheerful about journalism, most of which revolve around the “democratisation” of media. (I will resist the urge to point out how appropriate it is to provide a defence of the net’s effects on journalism in the form of a Top Ten list.)

I join Mr Rosen in applauding the way the net has reduced barriers to media participation. Having written a blog for many years, I can testify to the benefits of cheap digital publishing. But I do not take on faith the idea that democratising media necessarily improves journalism, and, unfortunately, Mr Rosen provides little in the way of facts to support his case. In place of hard evidence, we get dubious generalisations (“journalists are stronger and smarter when they are involved in the struggle for their own sustainability”), gauzy platitudes (“new life flows in through this opening”) and speculations (“data journalism is a huge opportunity”).

One of Mr Rosen’s most important claims crumbles when subjected to close scrutiny. He notes, correctly, that the net has dissolved the old geographic boundaries around news markets, making it easy for people to find stories from a variety of sources. But he then suggests that the effect, on the production side, has been to reduce redundant reporting, leading to less “pack journalism” and “a saner division of labour”. That would be nice if it were true, but it is not.

Much of what has been lost through the internet-driven winnowing of reporting staff is not duplicative effort but reporting in areas that were thinly covered to begin with: local and state governments, federal agencies, foreign affairs and investigative journalism. Having a strong, stable corps of reporters digging into these areas is crucial to having a well-informed citizenry, but since these forms of journalism tend to be expensive to produce and unattractive to online advertisers, they have suffered the heaviest cuts.

As Mr Rosen admits, coverage of state governments in America has eroded significantly. The number of journalists stationed in state capitols fell by a third between 2003 and 2009, creating big gaps in oversight. “In today’s capitol pressrooms,” American Journalism Review reports, “triage and narrowed priorities are the orders of the day.” The situation is similar with federal agencies in Washington, according to another AJR study. Between 2003 and 2010, the number of reporters at the Defence Department fell from 23 to 10; at the State Department from 15 to 9; at the Treasury Department from 12 to 6. “The watchdogs have abandoned their posts,” concludes the study, and “the quality of the reporting on the federal government has slipped.”

Foreign reporting, which is particularly expensive, has also suffered deep cuts. Over the past decade, nearly 20 American newspapers closed their foreign bureaus, and many others fired foreign correspondents. In Britain, daily newspapers have significantly curtailed their overseas reporting, according to a 2010 study by the Media Standards Trust, and alternative online sources are not taking up the slack. Research indicates that “the public do not seek out foreign news online”, according to the study. As foreign news is drained from the popular press, it becomes ever more the preserve of an elite.

If lone-wolf reporting is suffering in the web era, pack journalism is thriving, as evidenced by the swarming coverage of the Casey Anthony trial and the Anthony Weiner scandal. “The new paradox of journalism is more outlets covering fewer stories,” notes the Pew Project for Excellence in Journalism. What we are discovering is that in a world where advertisers pay by the click and readers read by the click, editorial attention and resources tend to become more concentrated and more keyed to spectacles. The upshot, contrary to Mr Rosen’s rosy assumption, is a division of journalistic labour that is even less sane than it used to be. Gadget blogs and gossip sites boom, while government beats go untrodden.

Despite the many experiments in online journalism, we have not found a substitute for the cross-subsidies that allowed newspapers to use the profits from popular features to pay for broad, in-depth reporting. The cross-subsidisation may have looked inefficient to economists, but as Clay Shirky, a media scholar, recently put it, “at least it worked”. Thanks to the net, it does not work any more.

It is easy to get caught up in the whirlwind of information that blows in great gusts through the internet. But we should remember that the primary function of journalism always has been and always will be the hard, skilled work of reporting the news. The subsequent sharing, tweeting, tagging, ranking, remixing and (yes) debating of the news are all important, but they are secondary functions—and, indeed, entirely dependent on primary reporting. Unless Mr Rosen can wave a magic wand and repair the damage that the internet has done to reporting and reporters, his argument that the net has improved journalism will remain an exercise in grasping at straws.

The third and final round comes next week.

Minds like sieves

“As gravity holds matter from flying off into space, so memory gives stability to knowledge; it is the cohesion which keeps things from falling into a lump, or flowing in waves.” -Emerson

There’s a fascinating – and, to me, disquieting – study on the internet’s effects on memory that’s just come out in Science.* It provides more evidence of how quickly and flexibly our minds adapt to the tools we use to think with, for better or for worse.

The study, “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips,” was conducted by three psychologists: Betsy Sparrow, of Columbia University; Jenny Liu, of the University of Wisconsin at Madison; and Daniel Wegner, of Harvard. They conducted a series of four experiments aimed at answering this question: Does our awareness of our ability to use Google to quickly find any fact or other bit of information influence the way our brains form memories? The answer, they discovered, is yes: “when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it.” The findings suggest, the researchers write, “that processes of human memory are adapting to the advent of new computing and communication technology.”

In the first experiment, people were asked a series of trivia questions. They were then given a test in which they were shown different corporate brand names, some from search engines (eg, Google) and some from other familiar companies (eg, Nike), in different colors and asked to identify the color. In this kind of test, called a Stroop task, a greater delay in naming the color indicates a greater interest in, and cognitive focus on, the word itself. As the researchers explain: “People who have been disposed to think about a certain topic typically show slowed reaction times for naming the color of the word when the word itself is of interest and is more [cognitively] accessible, because the word captures attention and interferes with the fastest possible color naming.” The experiment revealed that after people are asked a question to which they don’t know the answer, they take significantly longer to identify the color of a search-related brand name than a non-search-related one. The upshot: “It seems that when we are faced with a gap in our knowledge, we are primed to turn to the computer to rectify the situation.” There was even a delay, though a lesser one, in identifying the color of an internet brand name when people had been asked questions that they did know the answer to, suggesting that “the computer may be primed when the concept of knowledge in general is activated.” In other words, we seem to have trained our brains to immediately think of using a computer when we’re called on to answer a question or otherwise provide some bit of knowledge.

In the second experiment, people read forty factual statements of the kind you’d tend to look up with a search engine (eg, “an ostrich’s eye is bigger than its brain”) and then typed the statements into a computer. Half the participants were told the computer would save what they typed, and half were told that what they typed would be erased. Afterwards, the participants were asked to write down as many of the statements as they could remember. The experiment revealed that people who believed the information would be stored in the computer had a weaker memory of the information than those who assumed that the information would not be available in the computer. The researchers conclude: “Participants apparently did not make the effort to remember when they thought they could later look up the trivia statements they had read. Since search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”

The third experiment was a variation on the second, which again showed that people were less likely to remember a fact if they believed they would be able to find it on a computer and more likely to remember it if they believed it would not be available on a computer. The experiment further revealed that when people were asked whether a fact had been saved or erased, they displayed a better recall for the act of saving than erasing. “Thus,” the researchers explain, “it appears that believing that one won’t have access to the information in the future enhances memory for the information itself, whereas believing the information was saved externally enhances memory for the fact that the information could be accessed, at least in general.”

In the fourth experiment, people again read a series of factual statements and typed them into a computer. They were told that the statements would be stored in a specific folder with a generic name (eg, “facts” or “data”). They were then given ten minutes to write down as many statements as they could remember. Finally, they were asked to name the folder in which a particular statement was stored (eg, “What folder was the statement about the ostrich saved in?”). It was discovered that people were better able to remember the folder names than the facts themselves. “These results seem remarkable on the surface, given the memorable nature of the statements and the unmemorable nature of the folder names,” the researchers write. The experiment provides “preliminary evidence that when people expect information to remain continuously available (such as we expect with Internet access), we are more likely to remember where to find it than we are to remember the details of the item.”

Human beings, of course, have always had external, or “transactive,” information stores to supplement their biological memory. These stores can reside in the brains of other people we know (if your friend John is an expert on sports, then you know you can use John’s knowledge of sports facts to supplement your own memory) or in storage or media technologies such as maps and books and microfilm. But we’ve never had an “external memory” so capacious, so available and so easily searched as the web. If, as this study suggests, the way we form (or fail to form) memories is deeply influenced by the mere existence of external information stores, then we may be entering an era in history in which we will store fewer and fewer memories inside our own brains.

If a fact stored externally were the same as a memory of that fact stored in our mind, then the loss of internal memory wouldn’t much matter. But external storage and biological memory are not the same thing. When we form, or “consolidate,” a personal memory, we also form associations between that memory and other memories that are unique to ourselves and also indispensable to the development of deep, conceptual knowledge. The associations, moreover, continue to change with time, as we learn more and experience more. As Emerson understood, the essence of personal memory is not the discrete facts or experiences we store in our mind but “the cohesion” which ties all those facts and experiences together. What is the self but the unique pattern of that cohesion?

The researchers seem fairly sanguine about the results of their study. “We are becoming symbiotic with our computer tools,” they conclude, “growing into interconnected systems that remember less by knowing information than by knowing where the information can be found.” Although we don’t yet understand the possible “disadvantages of being constantly ‘wired,’” we have nevertheless “become dependent” on our gadgets. “We must remain plugged in to know what Google knows.” But as memory shifts from the individual mind to the machine’s shared database, what happens to that unique “cohesion” that is the self?

The see-through world (revisited)

Rough Type’s summer retro blitz continues with the recycling of this post, originally published on January 31, 2008.

As GPS receivers have become common accessories in cars, the benefits have been manifold. Millions of us have been relieved of the nuisance of getting lost or, even worse, the shame of having to ask a passerby for directions.

But, as with all popular technologies, those dashboard maps are having some unintended consequences. In many cases, the shortest route between two points turns out to run through once-quiet neighborhoods and formerly out-of-the-way hamlets.

Scores of villages have been overrun by cars and lorries whose drivers robotically follow the instructions dispensed by their satellite navigation systems. The International Herald Tribune reports that the parish council of Barrow Gurney in southwestern England has even requested, fruitlessly, that the town be erased from the maps used by the makers of navigation devices.

A research group in the Netherlands last month issued a study documenting the phenomenon and the resulting risk of accidents. It went so far as to say that GPS systems can turn drivers into “kid killers.”

Now, a new generation of sat-nav devices is on the horizon. They’ll be connected directly to the internet, providing drivers with a steady stream of real-time information about traffic congestion, accidents, and road construction. The debut of one of the new systems, called Dash Express, at this month’s Consumer Electronics Show in Las Vegas led to claims that the new technology might “spell the end of traffic jams forever.”

That would be nice, but I have my doubts. When we all have equally precise, equally up-to-the-second information on traffic conditions, the odds are that we’ll all respond in similar ways. As we all act in unison to avoid one bottleneck, we’ll just create a new bottleneck. We may come to look back fondly on the days when information was less uniformly distributed.
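
To make the point concrete, here’s a toy sketch (mine, with made-up numbers, not drawn from any traffic study): two routes whose travel times rise with the number of cars on them, and a thousand drivers who all react to the same live report by piling onto whichever route looked faster a moment ago.

```python
# Toy illustration (hypothetical numbers): when everyone shares the same
# real-time traffic information and reacts to it in unison, the bottleneck
# doesn't disappear -- it just moves.

N = 1000           # drivers
FREE_TIME = 20     # minutes on an empty route
SLOPE = 0.02       # extra minutes per car on a route

def travel_time(cars):
    """Travel time grows with the number of cars on the route."""
    return FREE_TIME + SLOPE * cars

loads = {"A": N, "B": 0}   # day 0: everyone happens to be on route A
for day in range(1, 6):
    # Everyone sees the same report and picks yesterday's faster route.
    faster = min(loads, key=lambda r: travel_time(loads[r]))
    loads = {r: (N if r == faster else 0) for r in loads}
    print(f"day {day}: all {N} drivers on {faster}, "
          f"{travel_time(N):.0f} min each")

# For comparison, an even split keeps both routes moving.
print(f"50/50 split: {travel_time(N // 2):.0f} min each")
```

With everyone chasing the same signal, the herd simply oscillates between the two routes, and every driver spends forty minutes in traffic; a dumb fifty-fifty split would get everyone there in thirty.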

That’s the problem with the so-called transparency that’s resulting from instantly available digital information. When we all know what everyone else knows, it becomes ever harder to escape the pack.

Just ask the hardcore surfers who dedicate themselves to finding the best waves. It used to be that they could keep their favorite beaches secret, riding their boards in relative solitude. But in recent months people have begun putting up dozens of video cameras, known as “surf cams,” along remote shorelines and streaming the video over the net.

Thanks to the cameras, once secluded waters are now crowded with hordes of novice surfers. That’s led to an outbreak of “surf cam rage,” according to a report last weekend in the New York Times. Die-hard surfers are smashing any cameras they find in the hope that they might be able to turn the tide of transparency.

But the vandalism is in vain. For every surf cam broken, a few more go up in its place.

There is, of course, much to be said for the easy access to information that the internet is allowing. Information that was once reserved for the rich, the well-connected, and the powerful is becoming accessible to all. That helps level the playing field, spreading economic and social opportunities more widely and fairly.

At the same time, though, transparency is erasing the advantages that once went to the intrepid, the dogged, and the resourceful. The surfer who through pluck and persistence found the perfect wave off an undiscovered stretch of beach is being elbowed out by the lazy masses who can discover the same wave with just a few mouse clicks. The commuter who pored over printed maps to find a short cut to work finds herself stuck in a jam with the GPS-enabled multitudes.

You have to wonder whether, as what was once opaque is made transparent, the bolder among us will lose the incentive to strike out for undiscovered territory. What’s the point when every secret becomes, in a real-time instant, common knowledge?

A see-through world may not be all that it’s cracked up to be. We may find that as we come to know everything about everything, we all end up in the same mess together.

News to me

Over at the Economist site, I’m debating the proposition “the internet is making journalism better, not worse” with Jay Rosen. He’s pro, I’m con.

Here’s my opening statement:

Journalism and the internet are both hot buttons, and when you combine the two you get plenty of opinions. But there are facts as well, and what the facts show is that the internet boom has done great damage to the journalism profession.

According to a 2010 review by the U.S. Congressional Research Service, newsroom staffing at American newspapers plunged by more than 25 percent between 2001 and 2009, and large-scale layoffs of reporters continued through 2010. A 2009 study commissioned by the Columbia Journalism Review concluded that newspaper editorial jobs dropped from more than 60,000 in 1992 to about 40,000 in 2009. Scores of newspapers, both large and small, have stopped publishing, and many others have scaled back the scope of their reporting. The picture appears similarly bleak in the U.K., where the number of working journalists fell by between 27 and 33 percent over the past decade, according to an analysis by the School of Journalism, Media & Communication at the University of Central Lancashire.

The decline in journalism jobs has been particularly severe at the local level, where reporters were scarce to begin with. A 400-page report issued last month by the Federal Communications Commission documents the consequences in distressing detail. The number of reporters covering state governments has dropped by a third since 2003, and more than 50 news organizations have discontinued statehouse reporting altogether. Cutbacks in reporting on city governments have been even steeper, and there have been significant declines in the number of journalists assigned to judicial, education, environment, and business beats as well as investigative reporting. “In many communities, we now face a shortage of local, professional, accountability reporting,” the FCC report concludes. “This is likely to lead to the kinds of problems that are, not surprisingly, associated with a lack of accountability—more government waste, more local corruption, less effective schools, and other serious community problems.”

The damage is not limited to newspapers. Newsmagazines, local commercial radio stations, and television networks have also slashed their newsgathering staffs since the 1980s, in some cases by 50 percent or more. The bottom line: Far fewer journalists are at work today than when the world wide web made its debut. The shrinking of the reporting corps not only constrains coverage; it also reduces quality, as remaining reporters become stretched thin even as they’re required to meet the relentless deadlines of online publishing. According to a 2010 survey by the Pew Research Center’s Project for Excellence in Journalism, 65 percent of news editors believe that the internet has led to a “loosening of standards” in journalism, with declines in accuracy and fact-checking and increases in unsourced reporting.

The problems can’t be blamed entirely on the net, of course. Like other industries, the press has suffered greatly from the recent recession, and mismanagement has also played a role in the travails of news organizations. But it is the shift of readers and advertisers from print media to online media that has been the major force reshaping the economics of the news business. The massive losses in print revenues, resulting from sharp declines in ads, subscriptions, and newsstand sales, have dwarfed the meager gains in online revenues. As the FCC report explains, “each print dollar [has been] replaced by four digital pennies.”

If we can agree that the internet, by altering the underlying economics of the news business, has thinned the ranks of professional journalists, then the next question is straightforward: Has the net created other modes of reporting to fill the gap? The answer, alas, is equally straightforward: No.

Certainly, the net has made it easier for ordinary citizens to be involved in journalism in all sorts of ways. Blogs and other online publishing and commenting tools allow people to share their opinions with a broad audience. Social networking services like Twitter and Facebook enable people to report breaking news, offer eyewitness accounts, and circulate links to stories. Groups of online volunteers have proven capable of digging newsworthy nuggets from large troves of raw data, whether it’s the expense reports of British politicians or the emails of Sarah Palin.

Such capabilities can be immensely valuable, but it’s important to recognize that they supplement rigorous, tenacious, in-depth reporting; they don’t replace it. And while there have been many noble attempts to create new kinds of net-based newsgathering organizations—some staffed by paid workers, others by volunteers; some for-profit, others not-for-profit—their successes so far have been modest and often fleeting. They have not come anywhere close to filling the gap left by the widespread loss of newspapers and reporters. As the Pew Center put it in its 2010 State of the News Media report, “the scale of these new efforts still amounts to a small fraction of what has been lost.”

The future may be sunnier. Professional news organizations may find ways to make more money online, and they may begin hiring again. Citizen journalism initiatives may begin to flourish on a large scale. Innovations in social networking may unlock entirely new ways to report and edit the news. But for the moment that’s all wishful thinking. What’s clear is that, up to now, the net has harmed journalism more than it’s helped it.

Here’s the debate site.

Semidelinkification, Shirky-style

Call me a nostalgist, but sometimes I like to plop my hoary frame down in front of the old desktop and surf the world wide web – the way we used to do back in the pre-Facebook days of my boyhood, when the internet was still tragically undermonetized. I was in fact on a little surfin’ safari this morning when I careened into a new post from Clay Shirky about – you guessed it – the future of the news biz.* It was totally longform, ie, interfrigginminable. But I did manage to read a sizable chunk of it before clicking the Instapaper “Read Later” button (a terrific way to avoid reading long stuff without having to feel guilty about it). It was a solid piece, as you’d expect from Shirky, if marred a bit by an unappealing new-media elitism (apparently the great unwashed never made it past the sports pages). But what interests me at the moment is not the content of Shirky’s post but its form, particularly the form of its linkage.

It’s been a while since I wrote about delinkification, but it’s still an issue I struggle with: How does one hang on to the benefits of having hyperlinks in online text while minimizing the distractions links cause to readers? Some people have taken to putting a list of sources, with links, at the foot of an online article or post, while leaving the main text unmolested. That works pretty well, but it strikes me as kind of cumbersome, and it also creates more work for the writer (which for a lazy s.o.b. like yours truly is a fatal flaw). You could also just dispense with links altogether – anyone who can’t by now Google a citation in two shakes is a moron – but for those of us who maintain a sentimental attachment to the idea of links as the coin of the internet realm (even while recognizing that the currency has been debased to near worthlessness), throwing in the towel on links seems like a moral failing.

But I like Shirky’s solution. He puts an asterisk at the end of a citation, and uses the asterisk as the link. I don’t know that it’s the best of all possible worlds, but it’s a nice mashup of the sedate footnote and the propulsive hyperlink. It’s much easier to tune out asterisks or other footnote marks than it is to tune out underscored, color-highlighted, in-your-face anchor text. And if you want to check out the cited document you still get the speed of the link. Click! Zoom! And you still make your little payment to the author of the cited work.
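
For anyone who wants to try it, here’s a rough sketch of the mechanics (my own toy script, not Shirky’s markup or anything he actually uses; the URL and function name are invented for illustration): take an ordinary inline link and move the href onto a trailing asterisk, leaving the anchor text as plain prose.

```python
import re

def shirkify(html: str) -> str:
    """Rewrite inline anchor-text links as plain text followed by a
    linked asterisk (a hypothetical sketch of 'Shirky-style' linking)."""
    pattern = re.compile(r'<a\s+href="([^"]+)"\s*>([^<]+)</a>')
    return pattern.sub(r'\2<a href="\1">*</a>', html)

before = 'a new post from <a href="http://example.com/shirky">Clay Shirky</a> about the news biz'
print(shirkify(before))
# a new post from Clay Shirky<a href="http://example.com/shirky">*</a> about the news biz
```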

There was a time, many years ago, when having a crapload of links in a post or other piece of online prose was a sign that you were au courant – that you were down with this whole web thing. That time’s long gone. Arriving at a page covered with dribs and drabs of blue link type is tiresome. (The equivalent today is using a Twitter hashtag to add a cute little ironic or sardonic comment at the end of a tweet. A year ago, the hashtag witticism was the mark of a hip tweetin’ dude. Now, it’s the mark of a dweeb.) It’s permissible these days – advisable, in fact – to offer a calmer reading experience to brain-addled netizens. Chill those pixels.

Given the revolting popularity of self-linking as a means to ratchet up page- and ad-views, I know that Shirky style and other forms of semidelinkification are unlikely to revolutionize the appearance of the web. So be it. I’m still going to go ahead and adopt Shirky style for my more discursive posts. For posts that exist purely to point to something interesting elsewhere on the net, I’ll continue to use trad text links. And I may change my mind and take a different direction in the future.

For the moment, though, Rough Type is officially shirkified.