Technology as love

For a few years now, I’ve used summertime laziness as an excuse to recycle some of this blog’s old posts. The following post was originally published, under the ponderous headline “God, Kevin Kelly and the Myth of Choices,” in July of 2011. The influence of tools on human possibility is a central theme of The Glass Cage, so it was interesting for me to reread this post in the wake of writing the book. If I were to rewrite the post now, I would shift the focus away from technological progress as a force in itself and place a much greater emphasis on how the design of particular tools determines whether they open or foreclose opportunities and choices for their users.

I suspect it’s accurate to say that Kevin Kelly’s deep Christian faith makes him something of an outlier among the Bay Area tech set. It also adds some interesting layers and twists to his often brilliant thinking about technology, requiring him to wrestle with ambiguities and tensions that most in his cohort are blind to. In a new interview with Christianity Today, Kelly explains the essence of what the magazine refers to as his “geek theology”:

We are here to surprise God. God could make everything, but instead he says, “I bestow upon you the gift of free will so that you can participate in making this world. I could make everything, but I am going to give you some spark of my genius. Surprise me with something truly good and beautiful.” So we invent things, and God says, “Oh my gosh, that was so cool! I could have thought of that, but they thought of that instead.”

I confess I have a little trouble imagining God saying something like “Oh my gosh, that was so cool!” It makes me think that Kelly’s God must look like Jeff Spicoli:

[Image: Jeff Spicoli in Fast Times at Ridgemont High]

But beyond the curious lingo, Kelly’s attempt to square Christianity with the materialist thrust of technological progress is compelling – and moving. If you’re going to have a geek theology, it seems wise to begin with a sense of the divinity of the act of making. In creating technology, then, we are elaborating, extending creation itself – carrying on God’s work, in Kelly’s view. Kelly goes on to offer what he terms “a technological metaphor for Jesus,” which stems from his experience watching computer game-makers create immersive virtual worlds and then enter the worlds they’ve created:

I had this vision of the unbounded God binding himself to his creation. When we make these virtual worlds in the future — worlds whose virtual beings will have autonomy to commit evil, murder, hurt, and destroy options — it’s not unthinkable that the game creator would go in to try to fix the world from the inside. That’s the story of Jesus’ redemption to me. We have an unbounded God who enters this world in the same way that you would go into virtual reality and bind yourself to a limited being and try to redeem the actions of the other beings since they are your creations … For some technological people, that makes [my] faith a little more understandable.

Kelly’s personal relationship to technology is complex. He may be a technophile in the abstract – a geek in the religious sense – but in his own life he takes a wary, skeptical view of new gadgets and other tools, resisting rather than giving in to their enchantments in order to protect his own integrity. Inspired by the example of the Amish, he is a technological minimalist: “I seek to find those technologies that assist me in my mission to express love and reflect God in the world, and then disregard the rest.” One senses here that Kelly is most interested in technological progress as a source of metaphor, a means of probing the mystery of existence. The interest is, oddly enough, a fundamentally literary one.

The danger with metaphor is that, like technology, it can be awfully seductive; it can skew one’s view of reality. In the interview, as in his recent, sweeping book, What Technology Wants, Kelly argues that technological progress is a force for good in the world, a force of “love,” because it serves to expand the choices available to human beings, to give people more “opportunities to express their unique set of God-given gifts.” Kelly therefore believes, despite his wariness about the effects of technology on his own life, that he has a moral duty to promote rapid technological innovation. If technology is love, then, by definition, the more of it, the better:

I want to increase all the things that help people discover and use their talents. Can you imagine a world where Mozart did not have access to a piano? I want to promote the invention of things that have not been invented yet, with a sense of urgency, because there are young people born today who are waiting upon us to invent their aids. There are Mozarts of this generation whose genius will be hidden until we invent their equivalent of a piano — maybe a holodeck or something. Just as you and I have benefited from the people who invented the alphabet, books, printing, and the Internet, we are obligated to materialize as many inventions as possible, to hurry, so that every person born and to-be-born will have a great chance of discovering and sharing their godly gifts.

There is a profound flaw in this view of progress. While I think that Kelly could make a strong case that technological progress increases the number of choices available to people in general, he goes beyond that to suggest that the process is continuously additive. Progress gives and never takes away. Each new technology means more choices for people. But that’s not true. When it comes to choices, progress both gives and takes away. It closes some possibilities even as it opens others. You can’t assume that, for any given child, technological advance will increase the likelihood that she will fulfill her natural potential – or, in Kelly’s words, discover and share her unique godly gifts. It may well reduce that likelihood.

The fallacy in Kelly’s thinking becomes quickly apparent if you look closely at his Mozart example (which he also uses in his book). The fact that Mozart was born after the invention of the piano, and that the piano was essential to Mozart’s ability to fulfill his potential, is evidence, according to Kelly’s logic, of the beneficence of progress. But while it’s true that if Mozart had been born 300 years earlier, the less advanced state of technology might have prevented him from fulfilling his potential, it’s equally true that if he had been born 300 years later, the more advanced state of technology might also have prevented him from fulfilling his potential. It’s absurd to believe that if Mozart were living today, he would create the great works he created in the eighteenth century – the symphonies, the operas, the concertos. Technological progress, among other forces, has transformed the world into one less suited to an artist of Mozart’s talents.

Genius emerges at the intersection of unique individual human potential and unique temporal circumstances. As circumstances change, some people’s ability to fulfill their potential will increase, but other people’s will decrease. Progress does not simply expand options. It changes options, and along the way options are lost as well as gained. Homer lived in a world that we would call technologically primitive, yet he created immortal epic poems. If Homer were born today, he would not be able to compose those poems in his head. That possibility has been foreclosed by progress. For all we know, if Homer (or Mozart) were born today, he would end up being an advertising copywriter, and perhaps not even a very good one.

Look at any baby born today, and try to say whether that child would have a greater possibility of fulfilling its human potential if during its lifetime (a) technological progress reversed, (b) technological progress stalled, (c) technological progress advanced slowly, or (d) technological progress accelerated quickly. You can’t. Because it’s unknowable.

The best you can argue, therefore, is that technological progress will, on balance, have a tendency to open more choices for more people. But that’s not a moral argument about the benefits of progress; it’s a practical argument, an argument based on calculations of utility. If, at the individual level, new technology may actually prevent people from discovering and sharing their “godly gifts,” then technology is not itself godly. Why would God thwart His own purposes? Technological progress is not a force of cosmic goodness, and it is surely not a force of cosmic love. It’s an entirely earthly force, as suspect as the flawed humans whose purposes it suits. Kelly’s belief that we are morally obligated “to materialize as many inventions as possible” and “to hurry” in doing so is not only based on a misperception; it’s foolhardy and dangerous.

Image: Still from the movie “Fast Times at Ridgemont High.”

From endless ladder to downward ramp


A couple of months ago, in the post “The Myth of the Endless Ladder,” I critiqued the widespread assumption that progress in production technology, such as advances in robotics and analytical software, inevitably “frees humans up to work on higher-value tasks,” in the words of economics reporter Annie Lowrey. While such a dynamic has often been true in the past, particularly in the middle years of the last century, there’s no guarantee that it will be true in the future. Evidence is growing, in fact, that a very different dynamic is now playing out, as computers take on more analytical and judgment-making tasks. In place of the endless ladder, we may now have what MIT economics professor and labor-market expert David Autor calls a “downward ramp.” The latest wave of automation technology appears to be “freeing us up” for less-interesting and less-challenging work.

In a New York Times column, Thomas Edsall points to new research, by economists Paul Beaudry, David Green, and Ben Sand, that suggests a widespread erosion in the skill levels of jobs since the year 2000. If in the 20 years leading up to the turn of the millennium we saw a “hollowing” of mid-skill jobs, with employment polarizing between low-skill and high-skill tasks, we now seem to be seeing a rapid loss of high-skill jobs as well. From top to bottom, the researchers report, workers are being pushed down the skill ramp:

After two decades of growth in the demand for occupations high in cognitive tasks, the US economy reversed and experienced a decline in the demand for such skills. The demand for cognitive tasks was to a large extent the motor of the US labor market prior to 2000. Once this motor reversed, the employment rate in the US economy started to contract. As we have emphasized, while this demand for cognitive tasks directly [affects] mainly high skilled workers, we have provided evidence that it has indirectly affected lower skill workers by pushing them out of jobs that have been taken up by higher skilled worker[s] displaced from cognitive occupations. This has resulted in high growth in employment in low skilled manual jobs with declining wages in those occupations, and has pushed many low skill individuals out of the labor market.

Beaudry, Green, and Sand encapsulate the new deskilling trend in this remarkable chart, which documents the intellectual demands of the jobs taken by college graduates*:

[Chart: average cognitive task intensity of jobs held by college graduates, 1980–2010, showing a rise until about 2000 and a decline thereafter]

Edsall reports that two other recent studies, one by Andrew Sum et al. and one by Lawrence Mishel et al., also find evidence of the deskilling trend among even the well-educated.

Comments MIT’s Andrew McAfee, co-author of The Second Machine Age:

This is bad news for several reasons. One of the most important is that the downward ramp appears to be leading to a “skills cascade” in which highly skilled / educated workers take jobs lower down the skill / wage ladder (since there’s not much demand at high levels), which in turn pushes less skilled workers even lower down the ladder, and so on. [Harvard economist] Larry Katz has found that “lots of new college graduates are moving into the service sector, that is, into traditionally non-college jobs, displacing young non-college workers.” Where this all ends is anyone’s guess.

At least one thing seems clear: The time has come to challenge not only the assumption that technological advances necessarily push people to higher-skilled work but also the self-serving Silicon Valley ideology that has wrapped itself around that assumption.

*Authors’ explanation of chart: “We plot the average cognitive task intensity of college graduates over the 1980–2010 period. We measure cognitive intensity by assigning to each 4 digit occupation an average of their scores for cognitive tasks from the Dictionary of Occupation Titles (DOT). We define cognitive tasks as the non-routine analytic and interactive tasks used in Autor, Levy, and Murnane (2003) in their examination of the skill content of jobs. Movements in this cognitive task intensity index reflect movements in college educated workers across occupations. The figure indicates that average cognitive task intensity for college graduates increased from the early 1980s until about the year 2000 and then declined throughout the rest of the series.”

Image: “Guys and Bikes” by Astrid Westvang.

The ebook equilibrium


Last week, I gave a talk at the Digital Book Conference at Book Expo America (BEA) in New York. Here’s the text of my remarks.

Let me begin with a confession: I used to fear ebooks. You’ll be pleased to hear that I’ve gotten over that.

The change in my own attitude or perception reflects, I sense, some trends that have been unfolding recently in the marketplace. Actually, it would be more accurate to say “some trends that have stopped unfolding.” The big upheaval that followed Amazon’s introduction of the Kindle at the end of 2007 is settling down, and the contours of the post-ebook world are coming into focus. What’s surprising is that those contours don’t seem altogether different from those of the pre-ebook world. Much has changed, but a lot hasn’t.

Just a few years ago, when digital book sales were exploding and print sales slumping, it seemed a given that the ebook would do to the printed book what the MP3 did to the compact disc: obliterate it, or at least marginalize it. We were fated to see, in short order, the ebook become the dominant form of the book. The Gutenberg era would, after nearly half a millennium, come to a close.

I remember back in 2010 seeing an interview with Nicholas Negroponte, the founder of MIT’s Media Lab, in which he predicted, with supreme confidence, that printed books would be dead in five years. By 2015, ebooks would have taken over.

That prediction has turned out to be crazy. It’s safe to say that in 2015 plenty of people will be buying and reading printed books — considerably more than will be buying and reading electronic books. But the prediction didn’t seem entirely outlandish when Negroponte made it. Ebooks were on a tear in 2010. Sales more than tripled during the course of that year, after having already tripled in 2009, and they’d go on to double in 2011. Tripling, tripling, doubling: that’s enormous growth, even when starting from a small base. It was in May of 2011 — almost exactly three years ago — that Amazon announced that Kindle books were outselling print books on its site.

For lovers of the page, like myself, the ebook juggernaut provoked great unease about the future of the printed book — a bulwark of culture seemed to be crumbling. For lovers of the digital, like Negroponte, the same phenomenon provoked great euphoria — a bulwark of culture seemed to be crumbling. The way you see a bulwark depends on which side of it you’re on.

But even back then, there was something that had me scratching my head: the sales reports I was getting on my own books — nonfiction books — didn’t match up with everything I was hearing. There was a disconnect between the hype and the numbers. I had definitely seen a sizable bump in digital sales, but it was far from a takeover. For every ebook I was selling, I was selling about eight printed books, hardcover and paperback combined. So ebooks represented somewhere between 10 and 15 percent of my sales. That’s a healthy percentage — and I was grateful for it, given that the royalty on an ebook is considerably higher than for a paperback — but it was far from a dominant percentage.
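The arithmetic behind that estimate is easy to check. A quick sketch, using the illustrative one-to-eight ratio above rather than actual sales figures:

```python
# Back-of-the-envelope check: if one ebook sells for every eight
# printed copies, what share of total unit sales is digital?
# (Illustrative ratio only, not actual sales data.)

def ebook_share(ebook_units: float, print_units: float) -> float:
    """Ebook units as a fraction of all units sold."""
    return ebook_units / (ebook_units + print_units)

print(f"{ebook_share(1, 8):.1%}")  # one ebook per eight print books -> 11.1%
```

One ebook in every nine units sold is about 11 percent, which is why a one-to-eight ratio lands squarely inside that 10-to-15-percent range.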

What was even more curious was that the ebook share wasn’t growing much. After shooting up, it seemed to have quickly stabilized at around that 10 to 15 percent mark — and that’s pretty much where it still is. I don’t think I’m an outlier. Other nonfiction writers I’ve talked to say their ebook share of sales falls into the 10 to 20 percent range. Occasionally, for a particularly popular new book, the share will reach up into the twenties, but that seems fairly rare.

The apparent discrepancy no longer seems like a mystery. As the book market has settled down over the last two years, a new equilibrium has established itself. The growth in ebook sales has not just slowed, as it was fated to — as the law of big numbers tells us, you can only double or triple sales for so long before you run out of room — it has flattened out. Ebook sales growth has begun to track the overall growth rate of the market. The ebook market has matured, in other words, and it represents, depending on whose figures you look at, between 20 and 30 percent of the entire U.S. market.

Rather than wilting in the face of the ebook onslaught, sales of printed books have actually held up pretty well. Sales have fallen only modestly overall, and hardcover sales seem remarkably robust.

Don’t get me wrong. The ebook success story is a remarkable one. Ebooks have become a large, vibrant, and essential part of the book market. But they haven’t taken over. Neither fear nor euphoria seems in order anymore.

The ebook revolution, I would argue, isn’t much of a revolution. The book market has not been transformed in the way the music market has. The landscape still looks familiar.

What we’re discovering, in economic terms, is that the ebook is not a substitute, or replacement, for the printed book, as so many have either feared or hoped. Rather the ebook, like the audiobook before it, if on a different scale, is a complement to the printed book. Each form has its strengths and its weaknesses, each has its place. There are many people who have decided that they prefer reading books on screens. There are plenty more who have decided that they’ll stick with ink on paper. Still others are happy to switch between the formats, reading ebooks while scrunched into a plane seat, say, and reading print copies when sprawled on the couch at home.

Beyond the differences in personal preferences, sales breakdowns suggest that ebooks are well suited to certain kinds of reading — light fiction, for instance — but less well suited to other kinds of reading, such as literary fiction and nonfiction. That seems to be why mass-market paperbacks have taken a particular hit recently, while hardcovers and trade paperbacks have shown resilience.

These differences aren’t just a matter of the age of the reader. It’s not that older people are clinging desperately to print, while younger people are embracing digital. The average age of the print book buyer is 42; the average age of the ebook buyer is 41. Kids still like to read printed books, and surveys show that students prefer printed textbooks over electronic ones by a wide margin. No massive generational shift is under way.

For publishers, as for readers, the new equilibrium in the market has turned out to be a happy one. While ebooks have cannibalized some paperback sales, they’ve also brought new readers into the book market and expanded the purchases made by some existing readers. Many of the books that have been sold in digital form would not have been sold in print. Giving people more choice in how they read and buy books means that, other things being equal, they’ll probably read and buy more books. The fact that ebooks carry attractive profit margins provides a further bonus to publishers (though how that added margin will ultimately come to be divvied up remains very much in doubt).

I think it’s a happy equilibrium for writers, too. I’ve already mentioned that the royalties on ebooks are considerably more attractive than those on paperbacks. So as long as the hardcover market holds up, as it’s been doing, we’ll do okay. And for writers who haven’t had luck landing an agent or a publisher, self-published ebooks provide a new route to getting their work into the broad marketplace. That’s a good thing. Professional and independent publishing, which are often themselves portrayed as antagonistic, can and should be complementary. There’s plenty of room for both.

That’s the good news. I wish it were all the news. But it’s not.

The bad news is that there remains a fundamental and destructive tension between what I’ll call the culture of the book and the culture of the computer, and the ebook, lying between the two sides, is being pulled in both directions. Yes, you can read a book on a computer screen, but that doesn’t mean that the computer is a friend to the book. Book reading has never fit all that well into the world of mass media, and it fits even less well into the world of mass digital media. The book has become a countercultural object. To read a book today is to swim against society’s current.

The mind with which we read a book is very different from the mind with which we navigate our everyday lives. In our day-to-day routines, we’re always trying to manipulate or influence or otherwise act on our surroundings, whether it’s by turning a car’s steering wheel or frying an egg or tapping a button on a smartphone or tweeting a tweet. But when we open a book, our expectations and attitudes change. Because, as the University of Florida’s Norman Holland puts it, we understand that “we cannot or will not change the work of art by our actions,” we’re relieved of our desire to exert an influence over objects and people. As a result, we can “disengage our [cognitive] systems for initiating actions.” That disengagement from the busy world frees us to become absorbed in the act of reading. It’s only when we leave behind the incessant busyness of our lives in society that we open ourselves to a book’s power, that we become book readers.

That doesn’t mean that reading is anti-social. The central subject of literature is society, and when we lose ourselves in a book we often receive an education in the subtleties of human relations. Some studies suggest that reading tends to make us at least a little more empathetic, a little more alert to the inner lives of others. We retreat into a book to connect more deeply with the world outside the book.

If you have a smartphone or a tablet or a laptop with you this morning — and I’d be aghast if any of you didn’t — you know very well that the computer is not a tool for removing yourself from the busyness of your lives. It’s a tool for plunging you more deeply into the whirlpool. It’s a technology of action, reaction, and distraction, not a technology of repose and reflection.

If publishers have made a mistake in dealing with the rise of ebooks, it lies in ceding to internet and computer companies power over the formats and the sales of electronic books. Those companies may be necessary and valuable business partners, but their interests are not the interests of those who write, publish, lend, and read books. Their interests, and their profits, lie in promoting the culture of the computer, which means chipping away at the culture of the book. When a person is engrossed in a book, that person is not feeding data and money into the coffers of internet firms.

We see the divergence of interests not only in ugly battles over buy buttons and advance orders. We see it in the dramatic shift away from dedicated e-readers, like the early Kindle and Nook, to multitasking tablets like the iPad and the Fire. A dedicated e-reader is a relatively calm medium, one that suits and promotes deep reading. The multifunctional tablet does the opposite. It’s designed for busyness. If the dedicated e-reader brought the computer into the culture of the book, the tablet drags the book into the culture of the computer.

As one once-enthusiastic reader of ebooks recently told me, “When I sat down with my old Kindle, I thought about books. When I turn on my Kindle Fire, I think about everything but books.”

The great challenge for publishers and librarians and writers today is to defend the culture of the book, whether the book manifests itself in pages or in pixels. Defending the culture of the book means not giving way to the culture of the computer, the culture of busyness and distraction. It means protecting the repose of the reader. It means resisting the urge to “enhance” the book by bringing new software functionality into its pages. And it means fighting the forces of hegemony in formats, in devices, and in retailing. This is a power struggle, as much cultural as financial, and it’s going to go on for a long time.

Whatever the dreams that people like Jeff Bezos and Larry Page and Mark Zuckerberg dream, they are not the dreams of readers.

Thank you.

Image: “A Kind of Regression,” by Ines Seidel.

Let them eat images of cake


David Graeber observes:

It used to be that Americans mostly subscribed to a rough-and-ready version of the labor theory of value. Everything we see around us that we consider beautiful, useful, or important was made that way by people who sank their physical and mental efforts into creating and maintaining it. Work is valuable insofar as it creates these things that people like and need. Since the beginning of the 20th century, there has been an enormous effort on the part of the people running this country to turn that around: to convince everyone that value really comes from the minds and visions of entrepreneurs, and that ordinary working people are just mindless robots who bring those visions to reality.

Not only does it make perfect sense, therefore, to replace all those working stiffs, all those glorified ditch-diggers who traffic in the stuff of the world, with actual mindless robots, but in doing so you’re doing the workers a great, if as yet unappreciated, favor. You’re liberating them to become . . . visionaries! “Unemployment” is just a coarse term we use to describe the pre-visionary state. And so Andreessen: “All human time, labor, energy, ambition, and goals reorient to the intangibles: the big questions, the deep needs.” Intangibility is the last refuge of the materialist.

Image of starchild from 2001.

Marx Andreessen

In a series of rhapsodic tweets, venture capitalist Marc Andreessen imagines a world in which robots take over all productive labor:

All human time, labor, energy, ambition, and goals reorient to the intangibles: the big questions, the deep needs. Human nature expresses itself fully, for the first time in history. Without physical need constraints, we will be whoever we want to be. The main fields of human endeavor will be culture, arts, sciences, creativity, philosophy, experimentation, exploration, adventure. Rather than nothing to do, we would have everything to do: curiosity, artistic and scientific creativity, new forms of status seeking. Imagine six, or 10, billion people doing nothing but arts and sciences, culture and exploring and learning. What a world that would be.

What a world, indeed. It would, in fact, be precisely the world that Karl Marx dreamed about, where “nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes.” Marx, too, believed that modern production technology would be instrumental in liberating people from the narrowness of traditional jobs, freeing human nature to express itself fully for the first time in history.

We know the process by which Marx expected his utopia of self-actualization to come into being. I wonder how Andreessen would go about making his utopia operational. Would he begin by distributing his own wealth to the masses?

The eunuch’s children

[Image: postage stamp depicting Cai Lun]

1. Pulp Fact

Gutenberg we know. But what of the eunuch Cai Lun?

A well-educated, studious young man, a close aide to the Emperor Hedi in the Chinese imperial court of the Eastern Han Dynasty, Cai invented paper one fateful day in the year 105. At the time, writing and drawing were done primarily on silk, which was elegant but expensive, or on bamboo, which was sturdy but cumbersome. Seeking a more practical alternative, Cai came up with the idea of mashing bits of tree bark and hemp fiber together in a little water, pounding the resulting paste flat with a stone mortar, and then letting it dry into sheets in the sun. The experiment was a success. Allowing for a few industrial tweaks, Cai’s method is still pretty much the way paper gets made today.

Cai killed himself some years later, having become entangled in a palace scandal from which he saw no exit. But his invention took on a life of its own. The craft of papermaking spread quickly throughout China and then, following the Silk Road westward, made its way into Persia, Arabia, and Europe. Within a few centuries, paper had replaced animal skins, papyrus mats, and wooden tablets as the world’s preferred medium for writing and reading. The goldsmith Gutenberg would, with his creation of the printing press around 1450, mechanize the work of the scribe, replacing inky fingers with inky machines, but it was Cai Lun who gave us our reading material and, some would say, our world.

2. Peak Paper

Paper may be the single most versatile invention in history, its uses extending from the artistic to the bureaucratic to the hygienic. Rarely, though, do we give it its due. The ubiquity and disposability of the stuff — the average American goes through a quarter ton of it every year — lead us to take it for granted, or even to resent it. It’s hard to respect something that you’re forever throwing in the trash or flushing down the john or blowing your nose into. But modern life is inconceivable without paper. If paper were to disappear, writes Ian Sansom in his recent book Paper: An Elegy, “Everything would be lost.”

But wait. “An elegy”? Sansom’s subtitle is half joking, but it’s half serious, too. For while paper will be around as long as we’re around, with the digital computer we have at last come up with an invention to rival Cai Lun’s. Over the last decade, annual per-capita paper consumption in developed countries has fallen sharply. If the initial arrival of the personal computer and its companion printer had us tearing through more reams than ever, the rise of the internet as a universal communication system seems to be having the opposite effect. As more and more information comes to be stored and exchanged electronically, we’re writing fewer checks, sending fewer letters, circulating fewer reports, and in general committing fewer thoughts to paper. Even our love notes are passed between servers.

In 1894, Scribner’s Magazine published an essay by the French litterateur Octave Uzanne titled “The End of Books.” Thomas Edison had invented the phonograph less than two decades earlier, and Uzanne thought it inevitable that books and periodicals would soon be replaced by “various devices for registering sound” that people would carry around with them. Flipping through printed sheets of paper demanded far too much effort from the modern “man of leisure,” he argued. “Reading, as we practice it today, soon brings on great weariness; for not only does it require of the brain a sustained attention which consumes a large proportion of the cerebral phosphates, but it also forces our bodies into various fatiguing attitudes.” The printing press and its quaint products were no match for modern technology.

You have to hand it to Uzanne. He anticipated the arrival of the audiobook, the iPod, and even the smartphone. About the obsolescence of the printed page, however, he was entirely wrong. Yet his prophecy would enjoy continuing popularity among the intelligentsia, repeated over and over again during the twentieth century. Every time a new communication medium came along — radio, telephone, cinema, TV, CD-ROM — pundits would send out, usually in printed form, another death notice for the press. H. G. Wells wrote an entire book proclaiming that microfilm would replace the printed volume.

In 2011, the Edinburgh International Book Festival featured a session titled — why mess with a winner? — “The End of Books.” One of the participants, the Scottish novelist Ewan Morrison, declared that “within 25 years the digital revolution will bring about the end of paper books.” Baby boomers, it seemed obvious to Morrison, would be the last generation to read words inked on pages. The future of the book and the magazine and the newspaper — the future of the word — lay in “e-publishing.” The argument seemed entirely reasonable at the time. Unlike Uzanne, who was merely speculating, Morrison could point to hard facts about trends in reading and publishing. People were flocking to the screen. Paper was toast.

Now, just three years later, the picture has grown blurrier. There are new facts, equally hard, which suggest that words will continue to appear on sheets of paper for a good long while. E-book sales, which skyrocketed after the launch of Amazon’s Kindle in late 2007, have fallen back to earth in recent months, and sales of physical books have remained surprisingly resilient. Printed books still account for about three-quarters of overall book sales in the United States, and if sales of used books, which have been booming, are taken into account, that percentage probably rises even higher. A recent survey revealed that even the biggest fans of e-books continue to purchase a lot of printed volumes.

Periodicals have had a harder go of it, thanks to the profusion of free alternatives online and the steep declines in print advertising. But subscriptions to print magazines seem to be stabilizing. Although some publications are struggling to survive, others are holding on to their readers. Digital subscriptions, while growing smartly, still represent only a tiny slice of the market, and a lot of magazine readers don’t seem eager to switch to e-versions. A survey of owners of iPads and other tablet computers, conducted last year, found that three-quarters of them still prefer to read magazines on paper. There are even some glimmers in the beleaguered newspaper business. The spread of paywalls and the bundling of print and digital subscriptions appear to be tempering the long-term decline in print circulation. A few major papers have even gained some print readers of late.

What’s striking is that the prospects for print have improved even as the use of media-friendly mobile computers and apps has exploded. If physical publications were dying, you would think their condition should be deteriorating rapidly now, not stabilizing.

3. Embodied Words

Our eyes tell us that the words and pictures on a screen are pretty much identical to the words and pictures on a piece of paper. But our eyes lie. What we’re learning now is that reading is a bodily activity. We take in information the way we experience the world — as much with our sense of touch as with our sense of sight. Some scientists believe that our brain actually interprets written letters and words as physical objects, a reflection of the fact that our minds evolved to perceive things, not symbols of things.

The differences between page and screen go beyond the simple tactile pleasures of good paper stock. To the human mind, a sequence of pages bound together into a physical object is very different from a flat screen that displays only a single “page” of information at a time. The physical presence of the printed pages, and the ability to flip back and forth through them, turns out to be important to the mind’s ability to navigate written works, particularly lengthy and complicated ones. Even though we don’t realize it consciously, we quickly develop a mental map of the contents of a printed text, as if its argument or story were a voyage unfolding through space. If you’ve ever picked up a book you read long ago and discovered that your hands were able to locate a particular passage quickly, you’ve experienced this phenomenon. When we hold a physical publication in our hands, we also hold its contents in our mind.

Those spatial memories seem to translate into more immersive reading and stronger comprehension. A recent experiment conducted with young readers in Norway found that, with both expository and narrative works, people who read from pages understood the text better than those who read the same material on a screen. The findings are consistent with a series of other recent reading studies. “We know from empirical and theoretical research that having a good spatial mental representation of the physical layout of the text supports reading comprehension,” wrote the Norwegian researchers. They suggested that the ability of print readers to “see as well as tactilely feel the spatial extension and physical dimensions” of an entire text likely played a role in their superior comprehension.

That may also explain why surveys in the United States and other countries show that college students continue to prefer printed textbooks over electronic ones by wide margins. Students say that traditional books are more flexible as study tools, encourage deeper and more attentive reading, and promote better understanding and retention of the material. It seems to be true, as Octave Uzanne suggested, that reading printed publications consumes a lot of “cerebral phosphates.” But maybe that’s something to be celebrated.

Electronic books and periodicals have advantages of their own, of course. They’re convenient. They often provide links to other relevant publications. Their contents can be searched and shared easily. They can include animations, audio and video snippets, and interactive features. They can be updated on the fly. When it comes to brief news reports or other simple stories, or works that we just want to glance at rather than read carefully, electronic versions may well be superior to printed ones.

We were probably mistaken to think of electronic publications as substitutes for printed ones. They seem to be different things, suited to different kinds of reading and providing different sorts of aesthetic and intellectual experiences. Some readers may continue to prefer print, others may develop a particular taste for the digital, and still others may happily switch back and forth between the two forms. This year in the United States, some two billion books and 350 million magazines will roll off presses and into people’s hands. We are still Cai Lun’s children.

This essay appeared originally, in a slightly different form, in the journal Nautilus. Images: postage stamp commemorating Cai Lun, issued in China in 1962; illustration from the 1894 article “The End of Books.”