I’ve been looking for good counterpoints to John Gray’s mind-altering book Straw Dogs since reading it a couple of years ago. Raymond Tallis provides one in his formidable critique of “neuroscientism” in The New Atlantis.
Here’s a drop from the bucket:
A good place to begin understanding why consciousness is not strictly reducible to the material is in looking at consciousness of material objects — that is, straightforward perception. Perception as it is experienced by human beings is the explicit sense of being aware of something material other than oneself. Consider your awareness of a glass sitting on a table near you. Light reflects from the glass, enters your eyes, and triggers activity in your visual pathways. The standard neuroscientific account says that your perception of the glass is the result of, or just is, this neural activity. There is a chain of causes and effects connecting the glass with the neural activity in your brain that is entirely compatible with, as in [Daniel] Dennett’s words, “the same physical principles, laws, and raw materials that suffice” to explain everything else in the material universe.
Unfortunately for neuroscientism, the inward causal path explains how the light gets into your brain but not how it results in a gaze that looks out. The inward causal path does not deliver your awareness of the glass as an item explicitly separate from you — as over there with respect to yourself, who is over here. This aspect of consciousness is known as intentionality (which is not to be confused with intentions). Intentionality designates the way that we are conscious of something, and that the contents of our consciousness are thus about something; and, in the case of human consciousness, that we are conscious of it as something other than ourselves. But there is nothing in the activity of the visual cortex, consisting of nerve impulses that are no more than material events in a material object, which could make that activity be about the things that you see. In other words, in intentionality we have something fundamental about consciousness that is left unexplained by the neurological account.
And lest we forget, amid all the clamor surrounding the McLuhan centennial, this week also marks the 45th anniversary of the death of Bobby Fuller at the age of 23, asphyxiated by gasoline fumes. The official cause of death was either suicide or accident – the coroner couldn’t decide – though many believe it was murder. I’m convinced that, like Robert Johnson before him, Fuller pawned his soul to the Devil, and the Devil collected on the loan.
As a footnote to my previous post on Marshall McLuhan and his legacy (tomorrow is the centenary of his birth), I share the following excerpt from a 1969 Playboy interview, in which he describes his vision of the end point of what we today call cloud computing. As is typical of McLuhan, there’s brilliance here, but there’s also a whole lot of bad craziness. At least I hope it’s bad craziness.
MCLUHAN: Automation and cybernation can play an essential role in smoothing the transition to the new society.
PLAYBOY: How?
MCLUHAN: The computer can be used to direct a network of global thermostats to pattern life in ways that will optimize human awareness. Already, it’s technologically feasible to employ the computer to program societies in beneficial ways.
PLAYBOY: How do you program an entire society – beneficially or otherwise?
MCLUHAN: There’s nothing at all difficult about putting computers in the position where they will be able to conduct carefully orchestrated programing of the sensory life of whole populations. I know it sounds rather science-fictional, but if you understood cybernetics you’d realize we could do it today. The computer could program the media to determine the given messages a people should hear in terms of their over-all needs, creating a total media experience absorbed and patterned by all the senses. We could program five hours less of TV in Italy to promote the reading of newspapers during an election, or lay on an additional 25 hours of TV in Venezuela to cool down the tribal temperature raised by radio the preceding month. By such orchestrated interplay of all media, whole cultures could now be programed in order to improve and stabilize their emotional climate, just as we are beginning to learn how to maintain equilibrium among the world’s competing economies [ha! -Rough Type].
PLAYBOY: How does such environmental programing, however enlightened in intent, differ from Pavlovian brainwashing?
MCLUHAN: Your question reflects the usual panic of people confronted with unexplored technologies. I’m not saying such panic isn’t justified, or that such environmental programing couldn’t be brainwashing, or far worse – merely that such reactions are useless and distracting. Though I think the programing of societies could actually be conducted quite constructively and humanistically, I don’t want to be in the position of a Hiroshima physicist extolling the potential of nuclear energy in the first days of August 1945. But an understanding of media’s effects constitutes a civil defense against media fallout.
The alarm of so many people, however, at the prospect of corporate programing’s creation of a complete service environment on this planet is rather like fearing that a municipal lighting system will deprive the individual of the right to adjust each light to his own favorite level of intensity. Computer technology can – and doubtless will – program entire environments to fulfill the social needs and sensory preferences of communities and nations. The content of that programing, however, depends on the nature of future societies – but that is in our own hands.
PLAYBOY: Is it really in our hands – or, by seeming to advocate the use of computers to manipulate the future of entire cultures, aren’t you actually encouraging man to abdicate control over his destiny?
MCLUHAN: First of all – and I’m sorry to have to repeat this disclaimer – I’m not advocating anything; I’m merely probing and predicting trends. Even if I opposed them or thought them disastrous, I couldn’t stop them, so why waste my time lamenting? As Carlyle said of author Margaret Fuller after she remarked, “I accept the Universe”: “She’d better.” I see no possibility of a worldwide Luddite rebellion that will smash all machinery to bits, so we might as well sit back and see what is happening and what will happen to us in a cybernetic world. Resenting a new technology will not halt its progress.
The point to remember here is that whenever we use or perceive any technological extension of ourselves, we necessarily embrace it. Whenever we watch a TV screen or read a book, we are absorbing these extensions of ourselves into our individual system and experiencing an automatic “closure” or displacement of perception; we can’t escape this perpetual embrace of our daily technology unless we escape the technology itself and flee to a hermit’s cave. By consistently embracing all these technologies, we inevitably relate ourselves to them as servomechanisms. Thus, in order to make use of them at all, we must serve them as we do gods. The Eskimo is a servomechanism of his kayak, the cowboy of his horse, the businessman of his clock, the cyberneticist – and soon the entire world – of his computer. In other words, to the spoils belongs the victor …
The machine world reciprocates man’s devotion by rewarding him with goods and services and bounty. Man’s relationship with his machinery is thus inherently symbiotic. This has always been the case; it’s only in the electric age that man has an opportunity to recognize this marriage to his own technology. Electric technology is a qualitative extension of this age-old man-machine relationship; 20th Century man’s relationship to the computer is not by nature very different from prehistoric man’s relationship to his boat or to his wheel – with the important difference that all previous technologies or extensions of man were partial and fragmentary, whereas the electric is total and inclusive. Now man is beginning to wear his brain outside his skull and his nerves outside his skin; new technology breeds new man. A recent cartoon portrayed a little boy telling his nonplused mother: “I’m going to be a computer when I grow up.” Humor is often prophecy.
The whole interview – it’s a whopper – can be read here.
These Economist debates seem to unspool in slo-mo. They’re a sin against realtime. But the third and final round of my debate with Jay Rosen on whether the net is making journalism better is now up.
Here’s my closing statement:
Like many who celebrate the net’s informational bounties, my opponent in this debate is a member of the online elite. He is a fixture on Twitter, having written, at last count, 16,963 tweets and garnered 61,765 followers. He is a prolific and popular blogger. He broadcasts his thoughts to the world through a FriendFeed account, a Facebook account, a Posterous account, a Tumblr account, a Storify account, a YouTube account and a Google+ account. And he has a weekly podcast. Jay Rosen is very much of the net.
I do not intend that as a criticism. Mr Rosen is plying his trade, and he is doing a fine job of it. On the internet, hyperactivity is no sin. But even though he has devoted so much time and energy to the online world, he has not been able to back up his defence of the net’s effects on journalism with facts. Instead, he continues to give us sunny platitudes and questionable generalisations. In his latest statement, he declares that “more people are consuming more [good journalism] than ever before”. That is a remarkably sweeping claim. What evidence does he supply to back it up? None.
I sense that Mr Rosen’s opinions about the state of journalism reflect the internet hothouse in which he spends his days. He sees a smattering of experiments in online reporting, few of which reach the masses, and he senses a renaissance in journalism. He sees a few dozen comments appended to an article, and he declares we are in the midst of a populist media revolution. He sees some nascent attempts to figure out how to pay for long-form journalism, and he senses an imminent widening of the national attention span. He calls journalism a “democratic beast”, but his “democracy” seems awfully narrow and awfully privileged.
Outside the new-media hothouse, people do not have the luxury of spending their waking hours tweeting, blogging, commenting, or cobbling together a Daily Me from a welter of sites and feeds. They are holding down jobs (or trying to find jobs). They have kids to raise, parents to care for, friends to keep up with, homes to clean. When they have spare time to catch up on the news, they often confront a wasteland. Their local paper has closed or atrophied. The newscasts on their local TV stations seem mainly concerned with murders, traffic jams and thunderstorms. Cable news shows present endless processions of blowhards. America’s once-mighty news magazines are out of business or spectres of their former selves.
In this light, Mr Rosen’s suggestion that “journalism, to be useful, needs not only to reach us with information, but to engage us in public argument” seems facile. Most people today would be happy with the information. And has the “public argument” really improved since the web’s arrival? It was loud and polarised before, and now it is louder and more polarised. The web rewards, with links and traffic, fervid expressions of ideological purity. We can see the result in Washington, where politicians preach, and tweet, to the converted, and the spirit of compromise, of appreciating an opponent’s point of view, is all but gone. We have no shortage of argument today. What we have is a shortage of good, unbiased reporting.
The drift towards our current state of affairs began long ago. But the web has accelerated the trend by making it much more difficult to keep a robust, even-handed news organisation in operation. Mr Rosen may be loath to admit it, but professional reporters are and will remain the main source of news. “In any community, journalists are the primary intermediaries for news,” wrote the Knight Commission on the Information Needs of Communities in a Democracy. “They ask tough questions. They chase obscure leads and confidential sources. They translate technical matters into clear prose. Where professionals are on the job, the public watchdog is well fed. Part-time, episodic or unco-ordinated public vigilance is not the same.” It is fine to talk about “news as a conversation”, but in the end what matters is how well journalism keeps the broad public informed and maintains a watchful eye on the powerful. By weakening those roles, the net has done great damage.
I understand how a member of the plugged-in elite would assume the internet has improved journalism. If you spend hours a day consuming news and producing opinions, the net provides you with endless choices, diversions and opportunities for self-expression. For the news junkie, the net is a crack house that dispenses its wares for free. But if you look beyond the elite, you see a citizenry starved of hard, objective reporting. For the typical person, the net’s disruptions have meant not a widening of options but a narrowing of them.
Mr Rosen is a skilled advocate for the net’s benefits. But praise of the gains needs to be tempered by an understanding of how the net has eroded journalism’s foundations. The damage is not over yet. Just last month, the Gannett chain announced the firing of 700 more employees at 80 community newspapers. If we are going to secure a better future for journalism, online and off, we need to be honest with ourselves about its present condition. We can begin by rejecting the motion before us.
I suspect it’s accurate to say that Kevin Kelly’s deep Christian faith makes him something of an outlier among the Bay Area tech set. It also adds some interesting layers and twists to his often brilliant thinking about technology, requiring him to wrestle with ambiguities and tensions that most in his cohort are blind to. In a new interview with Christianity Today, Kelly explains the essence of what the magazine refers to as his “geek theology”:
We are here to surprise God. God could make everything, but instead he says, “I bestow upon you the gift of free will so that you can participate in making this world. I could make everything, but I am going to give you some spark of my genius. Surprise me with something truly good and beautiful.” So we invent things, and God says, “Oh my gosh, that was so cool! I could have thought of that, but they thought of that instead.”
I confess I have a little trouble imagining God saying something like “Oh my gosh, that was so cool!” It makes me think that Kelly’s God must look like Jeff Spicoli.
But beyond the curious lingo, Kelly’s attempt to square Christianity with the materialist thrust of technological progress is compelling – and moving. If you’re going to have a geek theology, it seems wise to begin with a sense of the divinity of the act of making. In creating technology, then, we are elaborating, extending creation itself – carrying on God’s work, in Kelly’s view. Kelly goes on to offer what he terms “a technological metaphor for Jesus,” which stems from his experience watching computer game-makers create immersive virtual worlds and then enter the worlds they’ve created:
I had this vision of the unbounded God binding himself to his creation. When we make these virtual worlds in the future — worlds whose virtual beings will have autonomy to commit evil, murder, hurt, and destroy options — it’s not unthinkable that the game creator would go in to try to fix the world from the inside. That’s the story of Jesus’ redemption to me. We have an unbounded God who enters this world in the same way that you would go into virtual reality and bind yourself to a limited being and try to redeem the actions of the other beings since they are your creations … For some technological people, that makes [my] faith a little more understandable.
Kelly’s personal relationship to technology is complex. He may be a technophile in the abstract – a geek in the religious sense – but in his own life he takes a wary, skeptical view of new gadgets and other tools, resisting rather than giving in to their enchantments in order to protect his own integrity. Inspired by the example of the Amish, he is a technological minimalist: “I seek to find those technologies that assist me in my mission to express love and reflect God in the world, and then disregard the rest.” One senses here that Kelly is most interested in technological progress as a source of metaphor, a means of probing the mystery of existence. The interest is, oddly enough, a fundamentally literary one.
The danger with metaphor is that, like technology, it can be awfully seductive; it can skew one’s view of reality. In the interview, as in his recent, sweeping book, What Technology Wants, Kelly argues that technological progress is a force for good in the world, a force of “love,” because it serves to expand the choices available to human beings, to give people more “opportunities to express their unique set of God-given gifts.” Kelly therefore believes, despite his wariness about the effects of technology on his own life, that he has a moral duty to promote rapid technological innovation. If technology is love, then, by definition, the more of it, the better:
I want to increase all the things that help people discover and use their talents. Can you imagine a world where Mozart did not have access to a piano? I want to promote the invention of things that have not been invented yet, with a sense of urgency, because there are young people born today who are waiting upon us to invent their aids. There are Mozarts of this generation whose genius will be hidden until we invent their equivalent of a piano — maybe a holodeck or something. Just as you and I have benefited from the people who invented the alphabet, books, printing, and the Internet, we are obligated to materialize as many inventions as possible, to hurry, so that every person born and to-be-born will have a great chance of discovering and sharing their godly gifts.
There is a profound flaw in this view of progress. While I think that Kelly could make a strong case that technological progress increases the number of choices available to people in general, he goes beyond that to suggest that the process is continuously additive. Progress gives and never takes away. Each new technology means more choices for people. But that’s not true. When it comes to choices, technological progress both gives and takes away. It closes some possibilities even as it opens others. You can’t assume that, for any given child, technological advance will increase the likelihood that she will fulfill her natural potential – or, in Kelly’s words, discover and share her unique godly gifts. It may well reduce that likelihood.
The fallacy in Kelly’s thinking becomes quickly apparent if you look closely at his Mozart example (which he also uses in his book). The fact that Mozart was born after the invention of the piano, and that the piano was essential to his ability to fulfill his potential, is evidence, according to Kelly’s logic, of the beneficence of progress. But while it’s true that if Mozart had been born 300 years earlier, the less advanced state of technology might have prevented him from fulfilling his potential, it’s equally true that if he had been born 300 years later, the more advanced state of technology would also have prevented him from fulfilling it. It’s absurd to believe that if Mozart were living today, he would create the great works he created in the eighteenth century – the symphonies, the operas, the concertos. Technological progress has transformed the world into one less suited to an artist of Mozart’s talents.
Genius emerges at the intersection of unique individual human potential and unique temporal circumstances. As circumstances change, some people’s ability to fulfill their potential will increase, but other people’s will decrease. Progress does not simply expand options. It changes options, and along the way options are lost as well as gained. Homer lived in a world that we would call technologically primitive, yet he created immortal epic poems. If Homer were born today, he would not be able to compose those poems in his head. That possibility has been foreclosed by progress. For all we know, if Homer (or Mozart) were born today, he would end up being an advertising copywriter, and perhaps not even a very good one.
Look at any baby born today, and try to say whether that child would have a greater possibility of fulfilling its human potential if during its lifetime (a) technological progress reversed, (b) technological progress stalled, (c) technological progress advanced slowly, or (d) technological progress accelerated quickly. You can’t. Because it’s unknowable.
The best you can argue, therefore, is that technological progress will, on balance, have a tendency to open more choices for more people. But that’s not a moral argument about the benefits of progress; it’s a practical argument, an argument based on calculations of utility. If, at the individual level, new technology may actually prevent people from discovering and sharing their “godly gifts,” then technology is not itself godly. Why would God thwart His own purposes? Technological progress is not a force of cosmic goodness, and it is surely not a force of cosmic love. It’s an entirely earthly force, as suspect as the flawed humans whose purposes it suits. Kelly’s belief that we are morally obligated “to materialize as many inventions as possible” and “to hurry” in doing so is not only based on a misperception; it’s foolhardy and dangerous.
This week — Thursday, July 21, to be precise — marks the 100th anniversary of Marshall McLuhan’s birth. Here are some thoughts on the man and his legacy.
One of my favorite YouTube videos is a clip from a 1968 Canadian TV show featuring a debate between Norman Mailer and Marshall McLuhan. The two men, both icons of the sixties, could hardly be more different. Leaning forward in his chair, Mailer is pugnacious, animated, engaged. McLuhan, abstracted and smiling wanly, seems to be on autopilot. He speaks in canned riddles. “The planet is no longer nature,” he declares, to Mailer’s uncomprehending stare; “it’s now the content of an art work.”
Watching McLuhan, you can’t quite decide whether he was a genius or just had a screw loose. Both impressions, it turns out, are valid. As the novelist Douglas Coupland argued in his recent biography, Marshall McLuhan: You Know Nothing of My Work!, McLuhan’s mind was probably situated at the mild end of the autism spectrum. He also suffered from a couple of major cerebral traumas. In 1960, he had a stroke so severe that he was given last rites. In 1967, just a few months before the Mailer debate, surgeons removed a tumor the size of a small apple from the base of his brain. A later procedure revealed that McLuhan had an extra artery pumping blood into his cranium.
Between the stroke and the tumor, McLuhan managed to write a pair of extravagantly original books. The Gutenberg Galaxy, published in 1962, explored the cultural and personal consequences of the invention of the printing press, arguing that Gutenberg’s invention shaped the modern mind. Two years later, Understanding Media extended the analysis to the electric media of the twentieth century, which, McLuhan argued, were destroying the individualist ethic of print culture and turning the world into a tightly networked global village. The ideas in both books drew heavily on the works of other thinkers, including such contemporaries as Harold Innis, Albert Lord, and Wyndham Lewis, but McLuhan’s synthesis was, in content and tone, unlike anything that had come before.
When you read McLuhan today, you find all sorts of reasons to be impressed by his insight into media’s far-reaching effects and by his anticipation of the course of technological progress. When he looked at a Xerox machine in 1966, he didn’t just see the ramifications of cheap photocopying, as great as they were. He foresaw the transformation of the book from a manufactured object into an information service: “Instead of the book as a fixed package of repeatable and uniform character suited to the market with pricing, the book is increasingly taking on the character of a service, an information service, and the book as an information service is tailor-made and custom-built.” That must have sounded outrageous a half century ago. Today, with books shedding their physical skins and turning into software programs, it sounds like a given.
You also realize that McLuhan got a whole lot wrong. One of his central assumptions was that electric communication technologies would displace the phonetic alphabet from the center of culture, a process that he felt was well under way in his own lifetime. “Our Western values, built on the written word, have already been considerably affected by the electric media of telephone, radio, and TV,” he wrote in Understanding Media. He believed that readers, because their attention is consumed by the act of interpreting the visual symbols of alphabetic letters, become alienated from their other senses, sacrifice their attachment to other people, and enter a world of abstraction, individualism, and rigorously linear thinking. This, for McLuhan, was the story of Western civilization, particularly after the arrival of Gutenberg’s press.
By freeing us from our single-minded focus on the written word, new technologies like the telephone and the television would, he argued, broaden our sensory and emotional engagement with the world and with others. We would become more integrated, more “holistic,” at both a sensory and a social level, and we would recoup some of our primal nature. But McLuhan failed to anticipate that, as the speed and capacity of communication networks grew, what they would end up transmitting more than anything else would be text. The written word would invade electric media. If McLuhan were to come back to life today, the sight of people using their telephones as reading and writing devices would blow his mind. He would also be amazed to discover that the fuzzy, low-definition TV screens that he knew (and on which he based his famous distinction between hot and cold media) have been replaced by crystal-clear, high-definition monitors, which more often than not are crawling with the letters of the alphabet. Our senses are more dominated by the need to maintain a strong, narrow visual focus than ever before. Electric media are social media, but they are also media of isolation. If the medium is the message, then the message of electric media has turned out to be far different from what McLuhan supposed.
Of course, the fact that some of his ideas didn’t pan out wouldn’t have bothered McLuhan much. He was far more interested in playing with ideas than nailing them down. He intended his writings to be “probes” into the present and the future. He wanted his words to knock readers out of their intellectual comfort zones, to get them to entertain the possibility that their accepted patterns of perception might need reordering. Fortunately for him, he arrived on the scene at a rare moment in history when large numbers of people wanted nothing more than to have their minds messed with.
McLuhan was a scholar of literature, with a doctorate from Cambridge, and his interpretation of the intellectual and social effects of media was richly allusive and erudite. But what particularly galvanized the public and the press was the weirdness of his prose. Perhaps as a consequence of his unusual mind, he had a knack for writing sentences that sounded at once clinical and mystical. His books read like accounts of acid trips written by a bureaucrat. That kaleidoscopic, almost psychedelic style made him a darling of the counterculture — the bearded and the Birkenstocked embraced him as a guru — but it alienated him from his colleagues in academia. To them, McLuhan was a celebrity-seeking charlatan.
Neither his fans nor his foes saw him clearly. The central fact of McLuhan’s life was his conversion, at the age of twenty-five, to Catholicism, and his subsequent devotion to the religion’s rituals and tenets. He became a daily Mass-goer. Though he never discussed it, his faith forms the moral and intellectual backdrop to all his mature work. What lay in store, McLuhan believed, was the timelessness of eternity. The earthly conceptions of past, present, and future were by comparison of little consequence. His role as a thinker was not to celebrate or denigrate the world but simply to understand it, to recognize the patterns that would unlock history’s secrets and thus provide hints of God’s design. His job was not dissimilar, as he saw it, from that of the artist.
That’s not to say that McLuhan was without secular ambition. Coming of age at the dawn of mass media, he very much wanted to be famous. “I have no affection for the world,” he wrote to his brother in the late thirties, at the start of his academic career. But in the same letter he disclosed the “large dreams” he harbored for “the bedazzlement of men.” Modern media needed its own medium, the voice that would explain its transformative power to the world, and he would be it.
The tension between McLuhan’s craving for earthly attention and his distaste for the material world would never be resolved. Even as he came to be worshipped as a techno-utopian seer in the mid-sixties, he had already, writes Coupland, lost all hope “that the world might become a better place with new technology.” He heralded the global village, and was genuinely excited by its imminence and its possibilities, but he also saw its arrival as the death knell for the literary culture he revered. The electronically connected society would be the setting not for the further flourishing of civilization but for the return of tribalism, if on a vast new scale. “And as our senses [go] outside us,” he wrote, “Big Brother goes inside.” Always on display, always broadcasting, always watched, we would become mediated, technologically and socially, as never before. The intellectual detachment that characterizes the solitary thinker — and that was the hallmark of McLuhan’s own work — would be replaced by the communal excitements, and constraints, of what we have today come to call “interactivity.”
McLuhan also saw, with biting clarity, how all mass media are fated to become tools of commercialism and consumerism — and hence instruments of control. The more intimately we weave media into our lives, the more tightly we become locked in a corporate embrace: “Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit by taking a lease on our eyes and ears and nerves, we don’t really have any rights left.” Has a darker vision of modern media ever been expressed?
“Many people seem to think that if you talk about something recent, you’re in favor of it,” McLuhan explained during an uncharacteristically candid interview in 1966. “The exact opposite is true in my case. Anything I talk about is almost certain to be something I’m resolutely against, and it seems to me the best way of opposing it is to understand it, and then you know where to turn off the button.” Though the founders of Wired magazine would posthumously appoint McLuhan as the “patron saint” of the digital revolution, the real McLuhan was as much a Luddite as a technophile. He would have found the collective banality of Facebook abhorrent, if also fascinating.
In the fall of 1979, McLuhan suffered another major stroke, but this was one from which he would not recover. Though he regained consciousness, he remained unable to read, write, or speak until his death a little more than a year later. A lover of words — his favorite book was Joyce’s Finnegans Wake — he died in a state of wordlessness. He had fulfilled his own prophecy and become post-literary.
This post, along with seventy-eight others, is collected in the book Utopia Is Creepy.
My debate on the net’s effect on journalism with Jay Rosen has entered the second, rebuttal round over at the Economist’s site.
Here’s my rebuttal:
Jay Rosen grants that the internet has left us with “a weaker eye on power” while increasing “the supply of rubbish in and around journalism”. As a counterweight, he gives us ten reasons to be cheerful about journalism, most of which revolve around the “democratisation” of media. (I will resist the urge to point out how appropriate it is to provide a defence of the net’s effects on journalism in the form of a Top Ten list.)
I join Mr Rosen in applauding the way the net has reduced barriers to media participation. Having written a blog for many years, I can testify to the benefits of cheap digital publishing. But I do not take on faith the idea that democratising media necessarily improves journalism, and, unfortunately, Mr Rosen provides little in the way of facts to support his case. In place of hard evidence, we get dubious generalisations (“journalists are stronger and smarter when they are involved in the struggle for their own sustainability”), gauzy platitudes (“new life flows in through this opening”) and speculations (“data journalism is a huge opportunity”).
One of Mr Rosen’s most important claims crumbles when subjected to close scrutiny. He notes, correctly, that the net has dissolved the old geographic boundaries around news markets, making it easy for people to find stories from a variety of sources. But he then suggests that the effect, on the production side, has been to reduce redundant reporting, leading to less “pack journalism” and “a saner division of labour”. That would be nice if it were true, but it is not.
Much of what has been lost through the internet-driven winnowing of reporting staff is not duplicative effort but reporting in areas that were thinly covered to begin with: local and state governments, federal agencies, foreign affairs and investigative journalism. Having a strong, stable corps of reporters digging into these areas is crucial to having a well-informed citizenry, but since these forms of journalism tend to be expensive to produce and unattractive to online advertisers, they have suffered the heaviest cuts.
As Mr Rosen admits, coverage of state governments in America has eroded significantly. The number of journalists stationed in state capitols fell by a third between 2003 and 2009, creating big gaps in oversight. “In today’s capitol pressrooms,” American Journalism Review reports, “triage and narrowed priorities are the orders of the day.” The situation is similar with federal agencies in Washington, according to another AJR study. Between 2003 and 2010, the number of reporters at the Defence Department fell from 23 to 10; at the State Department from 15 to 9; at the Treasury Department from 12 to 6. “The watchdogs have abandoned their posts,” concludes the study, and “the quality of the reporting on the federal government has slipped.”
Foreign reporting, which is particularly expensive, has also suffered deep cuts. Over the past decade, nearly 20 American newspapers closed their foreign bureaus, and many others fired foreign correspondents. In Britain, daily newspapers have significantly curtailed their overseas reporting, according to a 2010 study by the Media Standards Trust, and alternative online sources are not taking up the slack. Research indicates that “the public do not seek out foreign news online”, according to the study. As foreign news is drained from the popular press, it becomes ever more the preserve of an elite.
If lone-wolf reporting is suffering in the web era, pack journalism is thriving, as evidenced by the swarming coverage of the Casey Anthony trial and the Anthony Weiner scandal. “The new paradox of journalism is more outlets covering fewer stories,” notes the Pew Project for Excellence in Journalism. What we are discovering is that in a world where advertisers pay by the click and readers read by the click, editorial attention and resources tend to become more concentrated and more keyed to spectacles. The upshot, contrary to Mr Rosen’s rosy assumption, is a division of journalistic labour that is even less sane than it used to be. Gadget blogs and gossip sites boom, while government beats go untrodden.
Despite the many experiments in online journalism, we have not found a substitute for the cross-subsidies that allowed newspapers to use the profits from popular features to pay for broad, in-depth reporting. The cross-subsidisation may have looked inefficient to economists, but as Clay Shirky, a media scholar, recently put it, “at least it worked”. Thanks to the net, it does not work any more.
It is easy to get caught up in the whirlwind of information that blows in great gusts through the internet. But we should remember that the primary function of journalism always has been and always will be the hard, skilled work of reporting the news. The subsequent sharing, tweeting, tagging, ranking, remixing and (yes) debating of the news are all important, but they are secondary functions—and, indeed, entirely dependent on primary reporting. Unless Mr Rosen can wave a magic wand and repair the damage that the internet has done to reporting and reporters, his argument that the net has improved journalism will remain an exercise in grasping at straws.