I feel measurably less emotional now


Sheryl Sandberg, Facebook’s COO, responds to the uproar about the company’s clandestine psychological experiment on its members:

“This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated. And for that communication we apologize. We never meant to upset you.”

So an experiment designed to explore how the delivery of information can be programmed to manipulate people’s emotional states was just part of routine product-development testing? No worries. I apologize for getting upset.

Image: Detail of André Brouillet’s “Une Leçon Clinique à la Salpêtrière.”


Filed under Uncategorized

An android dreams of automation


Google’s Android guru, Sundar Pichai, provides a peek into the company’s conception of our automated future:

“Today, computing mainly automates things for you, but when we connect all these things, you can truly start assisting people in a more meaningful way,” Mr. Pichai said. He suggested a way for Android on people’s smartphones to interact with Android in their cars. “If I go and pick up my kids, it would be good for my car to be aware that my kids have entered the car and change the music to something that’s appropriate for them,” Mr. Pichai said.

What’s illuminating is not the triviality of Pichai’s scenario — that billions of dollars might be invested in developing a system that senses when your kids get in your car and then seamlessly cues up “Baby Beluga” — but what the urge to automate small, human interactions reveals about Pichai and his colleagues. With this offhand example, Pichai gives voice to Silicon Valley’s reigning assumption, which can be boiled down to this: Anything that can be automated should be automated. If it’s possible to program a computer to do something a person can do, then the computer should do it. That way, the person will be “freed up” to do something “more valuable.” Completely absent from this view is any sense of what it actually means to be a human being. Pichai doesn’t seem able to comprehend that the essence, and the joy, of parenting may actually lie in all the small, trivial gestures that parents make on behalf of or in concert with their kids — like picking out a song to play in the car. Intimacy is redefined as inefficiency.

I guess it’s no surprise that what Pichai expresses is a robot’s view of technology in general and automation in particular — mindless, witless, joyless; obsessed with productivity, oblivious to life’s everyday textures and pleasures. But it is telling. The question is not what can be automated but what should be automated.

Image: “Communicating with the Beluga” by Bob.


Filed under The Glass Cage

The quarter-of-a-second rule


Mother Jones excerpts my brief essay on the malleability of our sense of time, “The Patience Deficit,” from the anthology What Should We Be Worried About? Here’s the essay’s first paragraph:

I’m concerned about time — the way we’re warping it and it’s warping us. Human beings, like other animals, seem to have remarkably accurate internal clocks. Take away our wristwatches and our cell phones and we can still make pretty good estimates about time intervals. But that faculty can also be easily distorted. Our perception of time is subjective; it changes with our circumstances and our experiences. When things are happening quickly all around us, delays that would otherwise seem brief begin to seem interminable. Seconds stretch out. Minutes go on forever. “Our sense of time,” observed William James in his 1890 masterwork The Principles of Psychology, “seems subject to the law of contrast.”

Read on.

Image: detail of “Forest (4)” by Gerhard Richter.


Filed under Uncategorized

Bringing economics into the world


Throwing his considerable weight behind the post-autistic economics movement, Robert Skidelsky offers a calm but blistering critique of the “mainstream economics” curriculum that has come to dominate university teaching. Contending that mainstream economics, with its pseudo-scientific mathematical models, is at heart an “ideology of the free market” that can circumscribe thinking and excuse failed policies, Skidelsky argues that the content of economics teaching needs to be broadened to include history, philosophy, politics, and psychology — to reflect the true economic lives of people.

It has become an article of faith that any move toward a more open or “pluralist” approach to economics portends regression to “pre-scientific” modes of thought, just as the results of the European Parliament election threaten to revive a more primitive mode of politics. Yet institutions and ideologies cannot survive by mere incantation or reminders of past horrors. They have to address and account for the contemporary world of lived experience. For now, the best that curriculum reform can do is to remind students that economics is not a science like physics, and that it has a much richer history than is to be found in the standard textbooks.

I suspect that Skidelsky’s piece will provoke a productive debate. Brad DeLong has already responded:

We have no business offering a narrow economics B.A. at all. At the undergraduate social-science level, the right way of organizing a major curriculum is to offer some flavor of history and moral philosophy: enough history that students are not ignorant, enough sociology and anthropology that students are not morons, and enough politics and philosophy that students are not fools. (And, I would say, a double dose of economics to ensure that majors understand what is key about our civilization and do not get the incidence of everything wrong.)

And here (pdf download) is the report of the Post-Crash Economics Society that spurred Skidelsky’s comments.

Via The Browser. Image by Penguincakes.


Filed under Uncategorized

Technology as love

For a few years now, I’ve used summertime laziness as an excuse to recycle some of this blog’s old posts. The following post was originally published, under the ponderous headline “God, Kevin Kelly and the Myth of Choices,” in July of 2011. The influence of tools on human possibility is a central theme of The Glass Cage, so it was interesting for me to reread this post in the wake of writing the book. If I were to rewrite the post now, I would shift the focus away from technological progress as a force in itself and place a much greater emphasis on how the design of particular tools determines whether they open or foreclose opportunities and choices for their users.

I suspect it’s accurate to say that Kevin Kelly’s deep Christian faith makes him something of an outlier among the Bay Area tech set. It also adds some interesting layers and twists to his often brilliant thinking about technology, requiring him to wrestle with ambiguities and tensions that most in his cohort are blind to. In a new interview with Christianity Today, Kelly explains the essence of what the magazine refers to as his “geek theology”:

We are here to surprise God. God could make everything, but instead he says, “I bestow upon you the gift of free will so that you can participate in making this world. I could make everything, but I am going to give you some spark of my genius. Surprise me with something truly good and beautiful.” So we invent things, and God says, “Oh my gosh, that was so cool! I could have thought of that, but they thought of that instead.”

I confess I have a little trouble imagining God saying something like “Oh my gosh, that was so cool!” It makes me think that Kelly’s God must look like Jeff Spicoli:


But beyond the curious lingo, Kelly’s attempt to square Christianity with the materialist thrust of technological progress is compelling – and moving. If you’re going to have a geek theology, it seems wise to begin with a sense of the divinity of the act of making. In creating technology, then, we are elaborating, extending creation itself – carrying on God’s work, in Kelly’s view. Kelly goes on to offer what he terms “a technological metaphor for Jesus,” which stems from his experience watching computer game-makers create immersive virtual worlds and then enter the worlds they’ve created:

I had this vision of the unbounded God binding himself to his creation. When we make these virtual worlds in the future — worlds whose virtual beings will have autonomy to commit evil, murder, hurt, and destroy options — it’s not unthinkable that the game creator would go in to try to fix the world from the inside. That’s the story of Jesus’ redemption to me. We have an unbounded God who enters this world in the same way that you would go into virtual reality and bind yourself to a limited being and try to redeem the actions of the other beings since they are your creations … For some technological people, that makes [my] faith a little more understandable.

Kelly’s personal relationship to technology is complex. He may be a technophile in the abstract – a geek in the religious sense – but in his own life he takes a wary, skeptical view of new gadgets and other tools, resisting rather than giving in to their enchantments in order to protect his own integrity. Inspired by the example of the Amish, he is a technological minimalist: “I seek to find those technologies that assist me in my mission to express love and reflect God in the world, and then disregard the rest.” One senses here that Kelly is most interested in technological progress as a source of metaphor, a means of probing the mystery of existence. The interest is, oddly enough, a fundamentally literary one.

The danger with metaphor is that, like technology, it can be awfully seductive; it can skew one’s view of reality. In the interview, as in his recent, sweeping book, What Technology Wants, Kelly argues that technological progress is a force for good in the world, a force of “love,” because it serves to expand the choices available to human beings, to give people more “opportunities to express their unique set of God-given gifts.” Kelly therefore believes, despite his wariness about the effects of technology on his own life, that he has a moral duty to promote rapid technological innovation. If technology is love, then, by definition, the more of it, the better:

I want to increase all the things that help people discover and use their talents. Can you imagine a world where Mozart did not have access to a piano? I want to promote the invention of things that have not been invented yet, with a sense of urgency, because there are young people born today who are waiting upon us to invent their aids. There are Mozarts of this generation whose genius will be hidden until we invent their equivalent of a piano — maybe a holodeck or something. Just as you and I have benefited from the people who invented the alphabet, books, printing, and the Internet, we are obligated to materialize as many inventions as possible, to hurry, so that every person born and to-be-born will have a great chance of discovering and sharing their godly gifts.

There is a profound flaw in this view of progress. While I think that Kelly could make a strong case that technological progress increases the number of choices available to people in general, he goes beyond that to suggest that the process is continuously additive. Progress gives and never takes away. Each new technology means more choices for people. But that’s not true. When it comes to choices, progress both gives and takes away. It closes some possibilities even as it opens others. You can’t assume that, for any given child, technological advance will increase the likelihood that she will fulfill her natural potential – or, in Kelly’s words, discover and share her unique godly gifts. It may well reduce that likelihood.

The fallacy in Kelly’s thinking becomes quickly apparent if you look closely at his Mozart example (which he also uses in his book). The fact that Mozart was born after the invention of the piano, and that the piano was essential to his ability to fulfill his potential, is evidence, by Kelly’s logic, of the beneficence of progress. But while it’s true that if Mozart had been born 300 years earlier, the less advanced state of technology might have prevented him from fulfilling his potential, it’s equally true that if he had been born 300 years later, the more advanced state of technology would also have prevented him from achieving it. It’s absurd to believe that if Mozart were living today, he would create the great works he created in the eighteenth century – the symphonies, the operas, the concertos. Technological progress, among other forces, has transformed the world into one less suited to an artist of Mozart’s talents.

Genius emerges at the intersection of unique individual human potential and unique temporal circumstances. As circumstances change, some people’s ability to fulfill their potential will increase, but other people’s will decrease. Progress does not simply expand options. It changes options, and along the way options are lost as well as gained. Homer lived in a world that we would call technologically primitive, yet he created immortal epic poems. If Homer were born today, he would not be able to compose those poems in his head. That possibility has been foreclosed by progress. For all we know, if Homer (or Mozart) were born today, he would end up being an advertising copywriter, and perhaps not even a very good one.

Look at any baby born today, and try to say whether that child would have a greater possibility of fulfilling its human potential if during its lifetime (a) technological progress reversed, (b) technological progress stalled, (c) technological progress advanced slowly, or (d) technological progress accelerated quickly. You can’t. Because it’s unknowable.

The best you can argue, therefore, is that technological progress will, on balance, have a tendency to open more choices for more people. But that’s not a moral argument about the benefits of progress; it’s a practical argument, an argument based on calculations of utility. If, at the individual level, new technology may actually prevent people from discovering and sharing their “godly gifts,” then technology is not itself godly. Why would God thwart His own purposes? Technological progress is not a force of cosmic goodness, and it is surely not a force of cosmic love. It’s an entirely earthly force, as suspect as the flawed humans whose purposes it suits. Kelly’s belief that we are morally obligated “to materialize as many inventions as possible” and “to hurry” in doing so is not only based on a misperception; it’s foolhardy and dangerous.

Image: Still from the movie “Fast Times at Ridgemont High.”


Filed under Uncategorized

From endless ladder to downward ramp


A couple of months ago, in the post “The Myth of the Endless Ladder,” I critiqued the widespread assumption that progress in production technology, such as advances in robotics and analytical software, inevitably “frees humans up to work on higher-value tasks,” in the words of economics reporter Annie Lowrey. While such a dynamic has often been true in the past, particularly in the middle years of the last century, there’s no guarantee that it will be true in the future. Evidence is growing, in fact, that a very different dynamic is now playing out, as computers take on more analytical and judgment-making tasks. In place of the endless ladder, we may now have what MIT economics professor and labor-market expert David Autor calls a “downward ramp.” The latest wave of automation technology appears to be “freeing us up” for less-interesting and less-challenging work.

In a New York Times column, Thomas Edsall points to new research, by economists Paul Beaudry, David Green, and Ben Sand, that suggests a widespread erosion in the skill levels of jobs since the year 2000. If in the 20 years leading up to the turn of the millennium we saw a “hollowing” of mid-skill jobs, with employment polarizing between low-skill and high-skill tasks, we now seem to be seeing a rapid loss of high-skill jobs as well. From top to bottom, the researchers report, workers are being pushed down the skill ramp:

After two decades of growth in the demand for occupations high in cognitive tasks, the US economy reversed and experienced a decline in the demand for such skills. The demand for cognitive tasks was to a large extent the motor of the US labor market prior to 2000. Once this motor reversed, the employment rate in the US economy started to contract. As we have emphasized, while this demand for cognitive tasks directly effects mainly high skilled workers, we have provided evidence that it has indirectly affected lower skill workers by pushing them out of jobs that have been taken up by higher skilled worker displaced from cognitive occupations. This has resulted in high growth in employment in low skilled manual jobs with declining wages in those occupations, and has pushed many low skill individuals out of the labor market.

Beaudry, Green, and Sand encapsulate the new deskilling trend in this remarkable chart, which documents the intellectual demands of the jobs taken by college graduates*:

[Chart: average cognitive task intensity of jobs held by college graduates, 1980–2010, showing a rise until about 2000 and a decline thereafter]

Edsall reports that two other recent studies, one by Andrew Sum et al. and one by Lawrence Mishel et al., also find evidence of the deskilling trend among even the well-educated.

MIT’s Andrew McAfee, co-author of The Second Machine Age, comments:

This is bad news for several reasons. One of the most important is that the downward ramp appears to be leading to a “skills cascade” in which highly skilled / educated workers take jobs lower down the skill / wage ladder (since there’s not much demand at high levels), which in turn pushes less skilled workers even lower down the ladder, and so on. [Harvard economist] Larry Katz has found that “lots of new college graduates are moving into the service sector, that is, into traditionally non-college jobs, displacing young non-college workers.” Where this all ends is anyone’s guess.

At least one thing seems clear: The time has come to challenge not only the assumption that technological advances necessarily push people to higher-skilled work but also the self-serving Silicon Valley ideology that has wrapped itself around that assumption.

*Authors’ explanation of chart: “We plot the average cognitive task intensity of college graduates over the 1980–2010 period. We measure cognitive intensity by assigning to each 4-digit occupation an average of their scores for cognitive tasks from the Dictionary of Occupation Titles (DOT). We define cognitive tasks as the non-routine analytic and interactive tasks used in Autor, Levy, and Murnane (2003) in their examination of the skill content of jobs. Movements in this cognitive task intensity index reflect movements in college educated workers across occupations. The figure indicates that average cognitive task intensity for college graduates increased from the early 1980s until about the year 2000 and then declined throughout the rest of the series.”

Image: “Guys and Bikes” by Astrid Westvang.


Filed under The Glass Cage

The ebook equilibrium


Last week, I gave a talk at the Digital Book Conference at Book Expo America (BEA) in New York. Here’s the text of my remarks.

Let me begin with a confession: I used to fear ebooks. You’ll be pleased to hear that I’ve gotten over that.

The change in my own attitude or perception reflects, I sense, some trends that have been unfolding recently in the marketplace. Actually, it would be more accurate to say “some trends that have stopped unfolding.” The big upheaval that followed Amazon’s introduction of the Kindle at the end of 2007 is settling down, and the contours of the post-ebook world are coming into focus. What’s surprising is that those contours don’t seem altogether different from those of the pre-ebook world. Much has changed, but a lot hasn’t.

Just a few years ago, when digital book sales were exploding and print sales slumping, it seemed a given that the ebook would do to the printed book what the MP3 did to the compact disc: obliterate it, or at least marginalize it. We were fated to see, in short order, the ebook become the dominant form of the book. The Gutenberg era would, after nearly half a millennium, come to a close.

I remember back in 2010 seeing an interview with Nicholas Negroponte, the founder of MIT’s Media Lab, in which he predicted, with supreme confidence, that printed books would be dead in five years. By 2015, ebooks would have taken over.

That prediction has turned out to be crazy. It’s safe to say that in 2015 plenty of people will be buying and reading printed books — considerably more than will be buying and reading electronic books. But the prediction didn’t seem entirely outlandish when Negroponte made it. Ebooks were on a tear in 2010. Sales more than tripled during the course of that year, after having already tripled in 2009, and they’d go on to double in 2011. Tripling, tripling, doubling: that’s enormous growth, even when starting from a small base. It was in May of 2011 — almost exactly three years ago — that Amazon announced that Kindle books were outselling print books on its site.

For lovers of the page, like myself, the ebook juggernaut provoked great unease about the future of the printed book — a bulwark of culture seemed to be crumbling. For lovers of the digital, like Negroponte, the same phenomenon provoked great euphoria — a bulwark of culture seemed to be crumbling. The way you see a bulwark depends on which side of it you’re on.

But even back then, there was something that had me scratching my head: the sales reports I was getting on my own books — nonfiction books — didn’t match up with everything I was hearing. There was a disconnect between the hype and the numbers. I had definitely seen a sizable bump in digital sales, but it was far from a takeover. For every ebook I was selling, I was selling about eight printed books, hardcover and paperback combined. So ebooks represented somewhere between 10 and 15 percent of my sales. That’s a healthy percentage — and I was grateful for it, given that the royalty on an ebook is considerably higher than for a paperback — but it was far from a dominant percentage.

What was even more curious was that the ebook share wasn’t growing much. After shooting up, it seemed to have quickly stabilized at around that 10 to 15 percent mark — and that’s pretty much where it still is. I don’t think I’m an outlier. Other nonfiction writers I’ve talked to say their ebook share of sales falls into the 10 to 20 percent range. Occasionally, for a particularly popular new book, the share will reach up into the twenties, but that seems fairly rare.

The apparent discrepancy no longer seems like a mystery. As the book market has settled down over the last two years, a new equilibrium has established itself. The growth in ebook sales has not just slowed, as it was fated to — as the law of big numbers tells us, you can only double or triple sales for so long before you run out of room — it has flattened out. Ebook sales growth has begun to track the overall growth rate of the market. The ebook market has matured, in other words, and it represents, depending on whose figures you look at, between 20 and 30 percent of the entire U.S. market.

Rather than wilting in the face of the ebook onslaught, sales of printed books have actually held up pretty well. Sales have fallen only modestly overall, and hardcover sales seem remarkably robust.

Don’t get me wrong. The ebook success story is a remarkable one. Ebooks have become a large, vibrant, and essential part of the book market. But they haven’t taken over. Neither fear nor euphoria seems in order anymore.

The ebook revolution, I would argue, isn’t much of a revolution. The book market has not been transformed in the way the music market has. The landscape still looks familiar.

What we’re discovering, in economic terms, is that the ebook is not a substitute, or replacement, for the printed book, as so many have either feared or hoped. Rather the ebook, like the audiobook before it, if on a different scale, is a complement to the printed book. Each form has its strengths and its weaknesses, each has its place. There are many people who have decided that they prefer reading books on screens. There are plenty more who have decided that they’ll stick with ink on paper. Still others are happy to switch between the formats, reading ebooks while scrunched into a plane seat, say, and reading print copies when sprawled on the couch at home.

Beyond the differences in personal preferences, sales breakdowns suggest that ebooks are well suited to certain kinds of reading — light fiction, for instance — but less well suited to other kinds of reading, such as literary fiction and nonfiction. That seems to be why mass-market paperbacks have taken a particular hit recently, while hardcovers and trade paperbacks have shown resilience.

These differences aren’t just a matter of the age of the reader. It’s not that older people are clinging desperately to print, while younger people are embracing digital. The average age of the print book buyer is 42; the average age of the ebook buyer is 41. Kids still like to read printed books, and surveys show that students prefer printed textbooks over electronic ones by a wide margin. No massive generational shift is under way.

For publishers, as for readers, the new equilibrium in the market has turned out to be a happy one. While ebooks have cannibalized some paperback sales, they’ve also brought new readers into the book market and expanded the purchases made by some existing readers. Many of the books that have been sold in digital form would not have been sold in print. Giving people more choice in how they read and buy books means that, other things being equal, they’ll probably read and buy more books. The fact that ebooks carry attractive profit margins provides a further bonus to publishers (though how that added margin will ultimately come to be divvied up remains very much in doubt).

I think it’s a happy equilibrium for writers, too. I’ve already mentioned that the royalties on ebooks are considerably more attractive than those on paperbacks. So as long as the hardcover market holds up, as it’s been doing, we’ll do okay. And for writers who haven’t had luck landing an agent or a publisher, self-published ebooks provide a new route to getting their work into the broad marketplace. That’s a good thing. Professional and independent publishing, which are often themselves portrayed as antagonistic, can and should be complementary. There’s plenty of room for both.

That’s the good news. I wish it were all the news. But it’s not.

The bad news is that there remains a fundamental and destructive tension between what I’ll call the culture of the book and the culture of the computer, and the ebook, lying between the two sides, is being pulled in both directions. Yes, you can read a book on a computer screen, but that doesn’t mean that the computer is a friend to the book. Book reading has never fit all that well into the world of mass media, and it fits even less well into the world of mass digital media. The book has become a countercultural object. To read a book today is to swim against society’s current.

The mind with which we read a book is very different from the mind with which we navigate our everyday lives. In our day-to-day routines, we’re always trying to manipulate or influence or otherwise act on our surroundings, whether it’s by turning a car’s steering wheel or frying an egg or tapping a button on a smartphone or tweeting a tweet. But when we open a book, our expectations and attitudes change. Because, as the University of Florida’s Norman Holland puts it, we understand that “we cannot or will not change the work of art by our actions,” we’re relieved of our desire to exert an influence over objects and people. As a result, we can “disengage our [cognitive] systems for initiating actions.” That disengagement from the busy world frees us to become absorbed in the act of reading. It’s only when we leave behind the incessant busyness of our lives in society that we open ourselves to a book’s power, that we become book readers.

That doesn’t mean that reading is anti-social. The central subject of literature is society, and when we lose ourselves in a book we often receive an education in the subtleties of human relations. Some studies suggest that reading tends to make us at least a little more empathetic, a little more alert to the inner lives of others. We retreat into a book to connect more deeply in the world outside the book.

If you have a smartphone or a tablet or a laptop with you this morning — and I’d be aghast if any of you didn’t — you know very well that the computer is not a tool for removing yourself from the busyness of your lives. It’s a tool for plunging you more deeply into the whirlpool. It’s a technology of action, reaction, and distraction, not a technology of repose and reflection.

If publishers have made a mistake in dealing with the rise of ebooks, it lies in ceding to internet and computer companies power over the formats and the sales of electronic books. Those companies may be necessary and valuable business partners, but their interests are not the interests of those who write, publish, lend, and read books. Their interests, and their profits, lie in promoting the culture of the computer, which means chipping away at the culture of the book. When a person is engrossed in a book, that person is not feeding data and money into the coffers of internet firms.

We see the divergence of interests not only in ugly battles over buy buttons and advance orders. We see it in the dramatic shift away from dedicated e-readers, like the early Kindle and Nook, to multitasking tablets like the iPad and the Fire. A dedicated e-reader is a relatively calm medium, one that suits and promotes deep reading. The multifunctional tablet does the opposite. It’s designed for busyness. If the dedicated e-reader brought the computer into the culture of the book, the tablet drags the book into the culture of the computer.

As one once-enthusiastic reader of ebooks recently told me, “When I sat down with my old Kindle, I thought about books. When I turn on my Kindle Fire, I think about everything but books.”

The great challenge for publishers and librarians and writers today is to defend the culture of the book, whether the book manifests itself in pages or in pixels. Defending the culture of the book means not giving way to the culture of the computer, the culture of busyness and distraction. It means protecting the repose of the reader. It means resisting the urge to “enhance” the book by bringing new software functionality into its pages. And it means fighting the forces of hegemony in formats, in devices, and in retailing. This is a power struggle, as much cultural as financial, and it’s going to go on for a long time.

Whatever the dreams that people like Jeff Bezos and Larry Page and Mark Zuckerberg dream, they are not the dreams of readers.

Thank you.

Image: “A Kind of Regression,” by Ines Seidel.


Filed under Uncategorized