
The robots we deserve

“Your domestic problems are completely solved.” So says a robotics technician to a grateful housewife in “Leave It to Roll-Oh,” a promotional film produced by the Chevrolet Motor Company for the 1939 New York World’s Fair. The titular star of the picture, a “chromium-plated butler,” is an ambulatory automaton that looks like a beefy version of the tin man from The Wizard of Oz. Operated by remote control, the contraption can be commanded to perform various chores at the push of a button: Clean House, Get Dinner, Wash Dishes, Fix Furnace.

Although “just a daydream,” as the movie’s narrator comes to reveal, Roll-Oh personified the common conception of a household robot. From the moment we first imagined having mechanical servants at our beck and call, we’ve assumed they would be constructed in our own image. Outfitted with arms and legs, heads and torsos, they would perform everyday tasks that we’d otherwise have to do ourselves. From The Jetsons’ indefatigable maid Rosie, to the officious droid C-3PO in Star Wars, to Westworld’s tortured “host” Dolores Abernathy, the robotic helpmates of popular culture have been humanoid in form and function.

It’s time to revise our assumptions. A robot invasion of our homes is under way, but the machines — so-called smart speakers like Amazon Echo, Google Home, and the forthcoming Apple HomePod — look nothing like what we anticipated. Small, squat, and stationary, they resemble vases or cat-food tins more than they do people. Echo and its ilk do, however, share an important trait with their imaginary forebears: They illuminate the times. Whatever their shape, robots tell us something vital about our technologies and ourselves.

Smart speakers have been around just three years, but they already have a hold on us. Powered by “chatbots” like Siri and Alexa, the devices are in the midst of a sales boom. Some 35 million Americans now use the diminutive, talking computers — more than twice the number of just a year ago, according to estimates by eMarketer — and analysts predict sales will continue to surge in the coming months. Google just expanded its Home line, and Microsoft, Samsung, Facebook, and China’s Alibaba are all expected to enter the market soon.

The allure of the gadgets is obvious. Smart speakers are oracles of the countertop. They may not speak for the gods, but they do deliver useful reports on news, traffic, and weather. And they have other talents that their Delphic ancestor couldn’t even dream of. They can serve as DJs, spinning playlists of everything from blue-eyed soul to British grime. They can diagnose ailments and soothe anxieties. They can summon taxis and order pizzas. They can read bedtime stories to toddlers. They can even bark like a watchdog to scare off burglars. And they promise to be the major-domos of home automation, adjusting lights and thermostats, controlling appliances, and issuing orders to specialized robots like the Roomba vacuum cleaner.

Still, if you were looking forward to having a Rosie scurrying around your abode, feather duster in hand, an Echo feels like a letdown. It just sits there.

There are good reasons the domestic robot has taken such an uninspiring shape. Visualizing a nimble, sure-footed android is easy, but building one is hard. As Carnegie Mellon professor Illah Nourbakhsh explains in his book Robot Futures, it requires advances not only in artificial intelligence but also in the complex hardware needed for movement, perception, and dexterity. The human nervous system is a marvel of physical control, able to sense and respond fluidly to an ever-changing environment. Just maintaining one’s balance when standing upright entails a symphony of neural signals and musculoskeletal adjustments, almost all of which take place outside conscious awareness.

Achieving that kind of agility with silicon and steel lies well beyond the technical reach of today’s engineers. Despite steady progress in all fields of robotics, even the most advanced of today’s automatons still look and behave like parodies of human beings. They get flustered by mundane tasks like loading a dishwasher or dusting a shelf of knickknacks, never mind cooking a meal or repairing a furnace. As for multitalented robots able to shift flexibly among an array of everyday tasks: they remain science-fiction fantasies. Roll-Oh is still a no-go.

Meanwhile, thanks to rapid gains in networking, natural language processing, and miniaturization, it’s become simple to manufacture small, cheap computers that can understand basic questions and commands, gather and synthesize information from online databanks, and control other electronics. The technology industry has enormous incentives to promote such gadgets. Now that many of the biggest tech firms operate like media businesses, trafficking in information, they’re in a race to create new products to charm and track consumers. Smart speakers provide a powerful complement to smartphones in this regard. Equipped with sensitive microphones, they serve as in-home listening devices — benign-seeming bugs — that greatly extend the companies’ ability to monitor the habits and needs of individuals. Whenever you chat with a smart speaker, you’re disclosing valuable information about your routines and proclivities.
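To make the contrast with Roll-Oh concrete, here is a minimal, hypothetical sketch (in Python) of the kind of loop such a gadget runs: a transcribed utterance comes in, a simple rule picks an action, and the action either fetches information or nudges another device. The function names and keyword rules are invented for illustration and are far cruder than anything in a shipping assistant; nothing here reflects Amazon’s or Google’s actual software.

```python
# A deliberately simplified, hypothetical sketch: transcribed speech in,
# a canned answer or (stubbed) device command out. All names are invented
# for illustration; this is not the software of any real smart speaker.

def fetch_weather(city: str) -> str:
    # A real device would query an online weather service here.
    return f"It's sunny and 72 degrees in {city}."

def set_thermostat(degrees: int) -> str:
    # A real device would send a command to a connected thermostat here.
    return f"Okay, thermostat set to {degrees} degrees."

def handle_utterance(text: str) -> str:
    """Map a transcribed voice command to a response or device action."""
    words = text.lower().replace("?", "").split()
    if "weather" in words:
        return fetch_weather("Boston")
    if "thermostat" in words:
        digits = [w for w in words if w.isdigit()]
        if digits:
            return set_thermostat(int(digits[0]))
    return "Sorry, I didn't catch that."

if __name__ == "__main__":
    print(handle_utterance("What's the weather today?"))
    print(handle_utterance("Set the thermostat to 68"))
```

The point of the sketch is how little physical machinery is involved: everything hard has been pushed into software and the network, which is exactly why these devices were buildable when Roll-Oh was not.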

Beyond the technical and commercial challenges, there’s a daunting psychological barrier to constructing and selling anthropomorphic machines. No one has figured out how to bridge what robot designers call the “uncanny valley” — the wide gap we sense between ourselves and imitations of ourselves. Because we humans are such social beings, our minds are exquisitely sensitive to the expressions, gestures, and manners of others. Any whiff of artificiality triggers revulsion. Humanoid robots seem creepy to us, and the more closely they’re designed to mimic us, the creepier they become. That puts roboticists in a bind: the more perfect their creations, the less likely we’ll want them in our homes. Lacking human features, smart speakers avoid the uncanny valley altogether.

Although they may not look like the robots we expected, smart speakers do have antecedents in our cultural fantasy life. The robot they most recall at the moment is HAL, the chattering eyeball in Stanley Kubrick’s sci-fi classic 2001: A Space Odyssey. But their current form — that of a standalone gadget — is not likely to be their ultimate form. They seem fated to shed their physical housing and turn into a sort of ambient digital companion. Alexa will come to resemble Samantha, the “artificially intelligent operating system” that beguiles the Joaquin Phoenix character in the movie Her. Through a network of tiny speakers, microphones, and sensors scattered around our homes, we’ll be able to converse with our solicitous AI assistants wherever and whenever we like.

Facebook founder and CEO Mark Zuckerberg spent much of last year programming a prototype of such a virtual agent. In a video released in December, he gave a demo of the system. Walking around his Silicon Valley home, he conducted a running dialogue with his omnipresent chatbot, calling on it to supply him with a clean t-shirt and toast bread for his breakfast, play movies and music, and entertain his infant daughter Max in her crib. Hooked up to outside cameras with facial-recognition software, the digitized Jeeves also acted as a sentry for the Zuckerberg compound, screening visitors and unlocking the gate.

Whether real or fictional, robots hold a mirror up to society. If Rosie and Roll-Oh embodied a twentieth-century yearning for domestic order and familial bliss, smart speakers symbolize our own, more self-absorbed time.

It seems apt that, as we come to live more of our lives virtually, through social networks and other simulations, our robots should take the form of disembodied avatars dedicated to keeping us comfortable in our media cocoons. Even as they spy on us, the gadgets offer sanctuary from the unruliness of reality, with its frictions and strains. They place us in an artificial world meticulously arranged to suit our bents and biases, a world that understands us and shapes itself automatically to our desires. Amazon’s decision to draw on classical mythology in naming its smart speaker was a masterstroke. Every Narcissus deserves an Echo.

This essay appeared originally, in a slightly shorter form and under the headline “These Are Not the Robots We Were Promised,” in the New York Times.

What they have wrought

Paul Lewis has a sharp, ominous article in this weekend’s Guardian about the misgivings some prominent Silicon Valley inventors are feeling over what they’ve created. Alumni from Google, Twitter, and Facebook worry that the products they helped design and market are having dire side effects, creating a society of compulsive, easily manipulated screen junkies.

Lewis describes how seemingly small design elements ended up having big effects on people’s behavior, from Facebook’s introduction of the Like button (a little dose of “social affirmation” that proved addictive to sender and receiver alike) to the company’s decision to switch its notification icon from the color blue to the color red (turning it from an unobtrusive reminder to an eye-grabbing “alarm signal”). Both the Like button and the red notification icon have become standards in social media apps.

Most illuminating is the story of the downward-swipe gesture used to refresh a feed. It was invented by Loren Brichter for his Tweetie app in 2009 and was adopted by Twitter when the company acquired Tweetie a year later. The “pull-to-refresh” feature has now become ubiquitous. But that raises a question: why does the gesture continue to be used now that it’s easy for social media companies to refresh their feeds automatically? The answer is that the tactile gesture is more seductive. Explains Lewis:

Brichter says he is puzzled by the longevity of the feature. In an era of push notification technology, apps can automatically update content without being nudged by the user. “It could easily retire,” he says. Instead it appears to serve a psychological function: after all, slot machines would be far less addictive if gamblers didn’t get to pull the lever themselves. Brichter prefers another comparison: that it is like the redundant “close door” button in some elevators with automatically closing doors. “People just like to push it.” …

“Smartphones are useful tools,” he says. “But they’re addictive. Pull-to-refresh is addictive. Twitter is addictive. These are not good things. When I was working on them, it was not something I was mature enough to think about. I’m not saying I’m mature now, but I’m a little bit more mature, and I regret the downsides.”

Seemingly benign design tweaks turned into “psychologically manipulative” features because they were introduced into businesses that make their money by encouraging compulsive behavior. The more we poke and stroke the screen, the more data the companies collect and the more ads they dispense. Whatever the Like button started out as, it was quickly recognized to be the engine of a powerful feedback loop through which social media companies could track their users and monetize the resulting data. “There’s no ethics,” former Googler Tristan Harris tells Lewis.

Even the prominent venture capitalist Roger McNamee, an early investor in Google and Facebook, is feeling remorse:

[McNamee] identifies the advent of the smartphone as a turning point, raising the stakes in an arms race for people’s attention. “Facebook and Google assert with merit that they are giving users what they want,” McNamee says. “The same can be said about tobacco companies and drug dealers.” …

McNamee chooses his words carefully. “The people who run Facebook and Google are good people, whose well-intentioned strategies have led to horrific unintended consequences,” he says. “The problem is that there is nothing the companies can do to address the harm unless they abandon their current advertising models.” … But McNamee worries the behemoths he helped build may already be too big to curtail.

Lewis’s article happened to appear on the same day as my Wall Street Journal essay “How Smartphones Hijack Our Minds.” It’s a telling coincidence, I think, that the headline on Lewis’s piece is so similar: “‘Our Minds Can Be Hijacked’: The Tech Insiders Who Fear a Smartphone Dystopia.” It’s been clear for some time that smartphones and social-media apps are powerful distraction machines. They routinely divide our attention. But the “hijack” metaphor — I took it from Adrian Ward’s article “Supernormal” — implies a phenomenon different and more insidious than simple distraction. To hijack something is to seize control of it from its rightful owner. What’s up for grabs is your mind.

“How Smartphones Hijack Our Minds”: sources

I draw on several studies in my Wall Street Journal essay “How Smartphones Hijack Our Minds.” Here are citations and links for anyone who would like to delve more deeply into the subject.

Three articles written or cowritten by Adrian Ward, formerly at the University of Colorado at Boulder and now at the University of Texas at Austin, were particularly valuable:

Ward, Duke, Gneezy, Bos, “Brain Drain: The Mere Presence of One’s Own Smartphone Reduces Available Cognitive Capacity,” Journal of the Association for Consumer Research, 2017.

Ward, “Supernormal: How the Internet Is Changing Our Memories and Our Minds,” Psychological Inquiry, 2013.

Wegner, Ward, “How Google Is Changing Your Brain,” Scientific American, 2013.

Other studies cited, in the order mentioned:

Stothart, Mitchum, Yehnert, “The Attentional Cost of Receiving a Cell Phone Notification,” Journal of Experimental Psychology: Human Perception and Performance, 2015.

Clayton, Leshner, Almond, “The Extended iSelf: The Impact of iPhone Separation on Cognition, Emotion, and Physiology,” Journal of Computer-Mediated Communication, 2015.

Thornton, Faires, Robbins, Rollins, “The Mere Presence of a Cell Phone May Be Distracting: Implications for Attention and Task Performance,” Social Psychology, 2014. (I refer in particular to the second of two experiments described in this paper.)

Lee, Kim, McDonough, Mendoza, Kim, “The Effects of Cell Phone Use and Emotion-Regulation Style on College Students’ Learning,” Applied Cognitive Psychology, 2017.

Beland, Murphy, “Ill Communication: Technology, Distraction & Student Performance,” Labour Economics, 2016.

Przybylski, Weinstein, “Can You Connect with Me Now? How the Presence of Mobile Communication Technology Influences Face-to-Face Conversation Quality,” Journal of Social and Personal Relationships, 2013.

Misra, Cheng, Genevie, Yuan, “The iPhone Effect: The Quality of In-Person Social Interactions in the Presence of Mobile Devices,” Environment and Behavior, 2016.

Sparrow, Liu, Wegner, “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips,” Science, 2011.

William James’s observation that “the art of remembering is the art of thinking” comes from a lecture collected in the book Talks to Teachers on Psychology and to Students on Some of Life’s Ideals.

Cynthia Ozick’s reference to data as “memory without history” can be found in her essay “T.S. Eliot at 101,” published in the New Yorker in 1989.

Finally, at the start of the essay, I refer to Apple data showing that the average iPhone owner uses the device 80 times a day. This was disclosed in an Apple security presentation by Ivan Krstić last year. The figure refers to the number of times a device is unlocked during a day. Since it’s possible to check notifications without unlocking the phone, the figure likely understates the number of times people actually look at their phones during the day.

The amazing, mind-eating smartphone

In “How Smartphones Hijack Our Minds,” an essay in the Weekend Review section of the Wall Street Journal, I examine recent research into the ways smartphones influence our cognition and perception — even when we’re not using the devices.

Here’s a taste:

Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object in the environment that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.” Media and communication devices, from telephones to TV sets, have always tapped into this instinct. Whether turned on or switched off, they promise an unending supply of information and experiences. By design, they grab and hold our attention in ways natural objects never could.

But even in the history of captivating media, the smartphone stands out. It’s an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what [Adrian] Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it’s part of the surroundings — which it always is. Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library, and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That’s what a smartphone represents to us. No wonder we can’t take our minds off it.

Read on.

Image: Modes Rodriguez.

Mattel and Google: a double standard for AI toys?

Mattel yesterday pulled the plug on Aristotle, a planned smart-speaker-cum-baby-monitor developed by the company’s Nabi unit. The product had generated controversy since it was announced in January, with lawmakers, pediatricians, and child advocates raising concerns about how the device would collect data on and influence the behavior of children. The Washington Post summed up the concerns:

For one, the existence of a home hub for kids raised questions about data privacy for a vulnerable population. It also triggered broader concerns about how quickly companies are marketing products to parents without understanding how technology could affect early childhood development.

Senator Edward Markey and Representative Joe Barton fired off a letter to Mattel CEO (and former Google exec) Margaret Georgiadis last week suggesting that Aristotle raises “serious privacy concerns as Mattel can build an in-depth profile of children and their family. It appears that never before has a device had the capability to so intimately look into the life of a child.” The letter sparked a new round of criticism in the press, with Jezebel calling Aristotle “creepy as hell” and Buzzfeed quoting a child privacy advocate arguing, “We shouldn’t be using kids as AI experiments … If we don’t know what the effect is, then we shouldn’t be putting that in children’s bedrooms.”

The letter and its fallout seem to be what prompted Mattel to announce yesterday that it wouldn’t go forward with the device.

At the very same moment Mattel was killing off Aristotle, Google was promoting new “kid friendly” accounts for its line of Google Home smart speakers:

We’re making Google Home more fun for the whole family, with 50+ new experiences for you to try out. Learn something new, or imagine with storytime. There are also plenty of fun activities; go on an adventure with Mickey Mouse, identify your alter ego with the Justice League D.C. Super heroes, or play Freeze Dance in your living room. These experiences will be supported by Family Link accounts on the Assistant, letting parents create accounts for their children under 13.

Even infants and toddlers can now be registered for Google accounts, allowing the company’s AI chatbot, Assistant, to collect data on them, talk with them, and tailor experiences for them. “We automatically collect and store certain information about the services your child uses and how your child uses them,” Google notes, deep in its privacy policies, “like when your child saves a picture in Google Photos, enters a query in Google Search, creates a document in Google Drive, talks to the Google Assistant, or watches a video in YouTube Kids.”

It’s hard for me to see much difference between Aristotle and Google Home with Family Link. Both raise concerns about children’s privacy, both allow companies to develop in-depth profiles of kids and their families, and both entail “using kids as AI experiments” without any clear understanding of how their development will be affected. Yet while the press hammered Mattel, it treated the Google news as benign, if not praiseworthy. “Google is making Home better for families and kids,” declared TechCrunch. “Google Assistant will tell your kids a bedtime story,” wrote Engadget. Buzzfeed chirped:

Home is now more kid-friendly, too. It can understand the way kids talk better, and includes more kid-friendly games, like “Which fruit are you?”. New commands include: “Hey Google, let’s learn”, or “let’s play a game,” or “tell me a story.” Google is also partnering with Disney to create kid-first experiences.

When a toy company tries to put a listening device into a kid’s bedroom, it’s creepy. When a tech giant does the same thing, it’s cool.

Facebook Rules

When television emerged as a fledgling medium in the middle years of the last century, it already had, in the form of the Federal Communications Commission, the Communications Act of 1934, and various other laws and precedents, a framework for regulating its content. The formal restrictions on the broadcasting of obscene, indecent, profane, prurient, and violent material, combined with the sensitivities of mainstream advertisers, defined the boundaries of Prime Time television through the fifties, sixties, and much of the seventies — until the spread of cable programming changed everything.

When the internet emerged as a medium in the 1990s, it was free of any such regulatory framework restricting its content. Indeed, an anything-goes ethos was as essential to the nature and ideals of the net as the family-friendly ethos was to the nature and ideals of TV during its formative decades. The net, in other words, escaped the sanitized Prime Time phase.

Or did it?

Today, Facebook released a set of “content guidelines for monetization” that might have been written by FCC bureaucrats in the 1950s. Among other things, the Facebook rules prohibit or restrict:

  • “Content that depicts family entertainment characters engaging in violent, sexualized, or otherwise inappropriate behavior, including videos positioned in a comedic or satirical manner.”
  • “Content that focuses on real world tragedies, including but not limited to depictions of death, casualties, physical injuries, even if the intention is to promote awareness or education.”
  • “Content that is incendiary, inflammatory, demeaning or disparages people, groups, or causes.”
  • “Content that is depicting threats or acts of violence against people or animals, [including] excessively graphic violence in the course of video gameplay.”
  • “Content where the focal point is nudity or adult content, including depictions of people in explicit or suggestive positions, or activities that are overly suggestive or sexually provocative.”
  • “Content that features coordinated criminal activity, drug use, or vandalism.”
  • “Content that depicts overly graphic images, blood, open wounds, bodily fluids, surgeries, medical procedures, or gore that is intended to shock or scare.”
  • “Content depicting or promoting the excessive consumption of alcohol, smoking, or drug use.”
  • “Inappropriate language.”

I’m not sure Petticoat Junction would have made it through that gauntlet.

These content restrictions aren’t being imposed by government fiat; Facebook is adopting them on its own, in response to growing public concerns about the net’s anything-goes ethos and, in particular, to advertisers’ growing worries about what Facebook VP Carolyn Everson terms “brand safety.” The fact that the rules allow little or no room for editorial judgment — is this image exploitative or journalistic? — reveals what happens when a tech firm becomes a media hub.

Some will welcome the sweeping new restrictions on content. Others will be appalled. What they make clear, though, is that the internet, as most experience it, has entered a new era, spurred by the consolidation of traffic into a handful of sites and apps run by companies whose fortunes hinge on their ability to keep advertisers happy. The internet is reliving the history of television, but in reverse. First came Anything Goes. Now comes Prime Time.

Image: George Carlin, circa 1972.

The internet as an innocent fraud

The paperback edition of Utopia Is Creepy is out today, September 12, from W. W. Norton & Company. Collecting seventy-nine of the best posts from Rough Type as well as sixteen essays and reviews I published between 2008 and 2016, the book, says Time, “punches a hole in Silicon Valley cultural hubris.”

Here’s an excerpt from the introduction:

“The most unfree souls go west, and shout of freedom.”
–D. H. Lawrence, Studies in Classic American Literature

The greatest of America’s homegrown religions — greater than Jehovah’s Witnesses, greater than the Church of Jesus Christ of Latter-Day Saints, greater even than Scientology — is the religion of technology. John Adolphus Etzler, a Pittsburgher, sounded the trumpet in his 1833 testament The Paradise within the Reach of All Men. By fulfilling its “mechanical purposes,” he wrote, the United States would turn itself into a new Eden, a “state of superabundance” where “there will be a continual feast, parties of pleasures, novelties, delights and instructive occupations,” not to mention “vegetables of infinite variety and appearance.”

Similar predictions proliferated throughout the nineteenth and twentieth centuries, and in their visions of “technological majesty,” as the critic and historian Perry Miller wrote, we find the true American sublime. We may blow kisses to agrarians like Jefferson and tree-huggers like Thoreau, but we put our faith in Edison and Ford, Gates and Zuckerberg. It is the technologists who shall lead us.

The internet, with its disembodied voices and ethereal avatars, seemed mystical from the start, its unearthly vastness a receptacle for America’s spiritual yearnings and tropes. “What better way,” wrote Cal State philosopher Michael Heim in 1991, “to emulate God’s knowledge than to generate a virtual world constituted by bits of information?” In 1999, the year Google moved from a Menlo Park garage to a Palo Alto office, the Yale computer scientist David Gelernter wrote a manifesto predicting “the second coming of the computer,” replete with gauzy images of “cyberbodies drift[ing] in the computational cosmos” and “beautifully-laid-out collections of information, like immaculate giant gardens.” The millenarian rhetoric swelled with the arrival of Web 2.0. “Behold,” proclaimed Kevin Kelly in an August 2005 Wired cover story: We are entering a “new world,” powered not by God’s grace but by the web’s “electricity of participation.” It would be a paradise of our own making, “manufactured by users.” History’s databases would be erased, humankind rebooted. “You and I are alive at this moment.”

The revelation continues to this day, the technological paradise forever glittering on the horizon. Even money men have taken sidelines in starry-eyed futurism. In 2014, venture capitalist Marc Andreessen sent out a rhapsodic series of tweets — he called it a “tweetstorm” — announcing that computers and robots were about to liberate us all from “physical need constraints.” Echoing John Adolphus Etzler (and also Karl Marx), he declared that “for the first time in history” humankind would be able to express its full and true nature: “We will be whoever we want to be. The main fields of human endeavor will be culture, arts, sciences, creativity, philosophy, experimentation, exploration, adventure.” The only thing he left out was the vegetables.

Such prophecies might be dismissed as the prattle of overindulged rich guys, but for one thing: They’ve shaped public opinion. By spreading a utopian view of technology, a view that defines progress as essentially technological, they’ve encouraged people to switch off their critical faculties and give Silicon Valley entrepreneurs and financiers free rein in remaking culture to fit their commercial interests. If, after all, the technologists are creating a world of superabundance, a world without work or want, their interests must be indistinguishable from society’s. To stand in their way, or even to question their motives and tactics, would be self-defeating. It would serve only to delay the wonderful inevitable.

The Silicon Valley line has been given an academic imprimatur by theorists from universities and think tanks. Intellectuals spanning the political spectrum, from Randian right to Marxian left, have portrayed the computer network as a technology of emancipation. The virtual world, they argue, provides an escape from repressive social, corporate, and governmental constraints; it frees people to exercise their volition and creativity unfettered, whether as entrepreneurs seeking riches in the marketplace or as volunteers engaged in “social production” outside the marketplace. “This new freedom,” wrote law professor Yochai Benkler in his influential 2006 book The Wealth of Networks, “holds great practical promise: as a dimension of individual freedom; as a platform for better democratic participation; as a medium to foster a more critical and self-reflective culture; and, in an increasingly information-dependent global economy, as a mechanism to achieve improvements in human development everywhere.” Calling it a revolution, he went on, is no exaggeration.

Benkler and his cohorts had good intentions, but their assumptions were bad. They put too much stock in the early history of the web, when its commercial and social structures were inchoate, its users a skewed sample of the population. They failed to appreciate how the network would funnel the energies of the people into a centrally administered, tightly monitored information system organized to enrich a small group of businesses and their owners.

The network would indeed generate a lot of wealth, but it would be wealth of the Adam Smith sort — and it would be concentrated in a few hands, not widely spread. The culture that emerged on the network, and that now extends deep into our lives and psyches, is characterized by frenetic production and consumption — smartphones have made media machines of us all — but little real empowerment and even less reflectiveness. It’s a culture of distraction and dependency. That’s not to deny the benefits of having easy access to an efficient, universal system of information exchange. It is to deny the mythology that has come to shroud the system. And it is to deny the assumption that the system, in order to provide its benefits, had to take its present form.

Late in his life, the economist John Kenneth Galbraith coined the term “innocent fraud.” He used it to describe a lie or a half-truth that, because it suits the needs or views of those in power, is presented as fact. After much repetition, the fiction becomes common wisdom. “It is innocent because most who employ it are without conscious guilt,” Galbraith wrote. “It is fraud because it is quietly in the service of special interest.” The idea of the computer network as an engine of liberation is an innocent fraud.

A longer excerpt from the introduction appears in Aeon. The paperback edition of Utopia Is Creepy is available now at your local independent bookstore or from Barnes & Noble, Amazon, or Powell’s.