The unbitten Apple

Apple’s logo remains one of the great business marks: simple, eloquent, indelible. But, more and more, the stylized apple with the bite missing is looking like an anachronism.

The colors went long ago, of course, those happy, trippy rainbow stripes that connected the company with its flower-child origins. But the bite remained, signifying and celebrating the human: organic, flawed, sweet. The missing piece was our piece. It welcomed us in. It made us part of the company. It put the personal in personal computing. It familiarized the machine.

At some point, though, Apple lost patience with us, with our fiddling, our ineptitude. It began to see the bite as a wound, and ever since it has been seeking to heal the damage. Apple wants to be pristine, untouched by outside forces, entire unto itself.

Apple’s ideal now is the unbitten apple, the immaculate fruit.

One by one the portals go, the entrances and exits are sealed. The customer is locked out of the device — and locked into the “ecosystem.” Today, in a media ceremony, it was the headphone jack that was exorcised. That tiny analog orifice linking the iPhone back to the transistor radio, the Walkman, the iPod: gone. And why not? With the socket and its audio converter removed, the phone will become even slimmer, even lighter, even more elegant. It will be better insulated against the elements. It will be more totemic. It will be purer.

Removing the headphone jack was an act of “courage” on Apple’s part, explained the company’s marketing chief, Phil Schiller: “The courage to move on and do something new that betters all of us.”

With just the one port remaining, the proprietary, tightly guarded Lightning port, the iPhone is getting very close now to the ideal. Apple will soon be whole.

Transhumanism merges tool-making and myth-making

What will we make of ourselves? The question is no longer figurative; it’s literal. Biotechnology and genetic engineering are giving us new tools to reshape ourselves, at both the individual and the species level. My new book, Utopia Is Creepy, out today, concludes with an essay, “The Daedalus Mission,” that tries to make sense of radical human enhancement, or “transhumanism.” Aeon is featuring excerpts from the essay (mainly the beginning and the ending). Here’s a little more, from the middle:

Transhumanism is “an extension of humanism,” argues Nick Bostrom, an Oxford philosophy professor who has been one of the foremost proponents of radical human enhancement. “Just as we use rational means to improve the human condition and the external world, we can also use such means to improve ourselves, the human organism. In doing so, we are not limited to traditional humanistic methods, such as education and cultural development. We can also use technological means that will eventually enable us to move beyond what some would think of as ‘human.’” The ultimate benefit of transhumanism, in Bostrom’s view, is that it expands “human potential,” giving individuals greater freedom “to shape themselves and their lives according to their informed wishes.” Transhumanism unchains us from our nature.

Other transhumanists take a subtly different tack in portraying their beliefs as part of the humanistic tradition. They suggest that the greatest benefit of radical enhancement is not that it allows us to transcend our deepest nature but that it allows us to fulfill it. “Self-reconstruction” is “a distinctively human activity, something that helps define us,” writes Duke University bioethicist Allen Buchanan in his book Better Than Human. “We repeatedly alter our environment to suit our needs and preferences. In doing this we inevitably alter ourselves as well. The new environments we create alter our social practices, our cultures, our biology, and even our identity.” The only difference now, he says, “is that for the first time we can deliberately, and in a scientifically informed way, change our selves.” We can extend the Enlightenment into our cells.

Critics of radical human enhancement, often referred to as bioconservatives, take the opposite view, arguing that transhumanism is antithetical to humanism. Altering human nature in a fundamental way, they contend, is more likely to demean or even destroy the human race than elevate it. Some of their counterarguments are pragmatic. By tinkering with life, they warn, researchers risk opening a Pandora’s box, inadvertently unleashing a biological or environmental catastrophe. They also caution that access to expensive enhancement procedures and technologies is likely to be restricted to economic or political elites. Society may end up riven into two classes, with the merely normal masses under the bionic thumbs of an oligarchy of supermen. They worry, too, that as people gain prodigious intellectual and physical abilities, they’ll lose interest in the very activities that bring pleasure and satisfaction to their lives. They’ll suffer “self-alienation,” as New Zealand philosopher Nicholas Agar puts it.

But at the heart of the case against transhumanism lies a romantic belief in the dignity of life as it has been given to us. There is an essence to humankind, bioconservatives believe, from which springs both our strengths and our defects. Whether bestowed by divine design or evolutionary drift, the human essence should be cherished and protected as a singular gift, they argue. “There is something appealing, even intoxicating, about a vision of human freedom unfettered by the given,” writes Harvard professor Michael J. Sandel in The Case against Perfection. “But that vision of freedom is flawed. It threatens to banish our appreciation of life as a gift, and to leave us with nothing to affirm or behold outside our own will.” As a counter to what they see as misguided utilitarian utopianism, bioconservatives counsel humility.

The transhumanists and the bioconservatives are wrestling with the largest of questions: Who are we? What is our destiny? But their debate is a sideshow. Intellectual wrangling over the meaning of humanism and the fate of humanity is not going to have much influence over how people respond when offered new opportunities for self-expression, self-improvement, and self-transformation. The public is not going to approach transhumanism as a grand moral or political movement, a turning point in the history of the species, but rather as a set of distinct products and services, each offering its own possibilities. Whatever people sense is missing in themselves or their lives they will seek to acquire by whatever means available. And as standards of beauty, intelligence, talent, and status change, even those wary of human enhancement will find it hard to resist the general trend. Wherever they may lead us, our attempts to change human nature will be governed by human nature.

We are myth makers as well as tool makers. Biotechnology allows us to merge these two instincts, giving us the power to refashion the bodies we have and the lives we lead to more closely match those we imagine for ourselves. Transhumanism ends in a paradox. The rigorously logical work that scientists, doctors, engineers, and programmers are doing to enhance and extend our bodies and minds is unlikely to raise us onto a more rational plane. It promises, instead, to return us to a more mythical existence, as we deploy our new tools in an effort to bring our dream selves more fully into the world.

Photo: Richard Schneider.

Big data and the limits of social engineering

The following review of Alex Pentland’s book Social Physics appeared originally, in a slightly different form, in MIT Technology Review.

In 1969, Playboy published a long, freewheeling interview with Marshall McLuhan in which the media theorist and sixties icon sketched a portrait of the future that was at once seductive and repellent. Noting the ability of digital computers to analyze data and communicate messages, McLuhan predicted that the machines eventually would be deployed to fine-tune society’s workings. “The computer can be used to direct a network of global thermostats to pattern life in ways that will optimize human awareness,” he said. “Already, it’s technologically feasible to employ the computer to program societies in beneficial ways.” He acknowledged that such centralized control raised the specter of “brainwashing, or far worse,” but he stressed that “the programming of societies could actually be conducted quite constructively and humanistically.”

The interview appeared when computers were used mainly for arcane scientific and industrial number-crunching. To most readers at the time, McLuhan’s words must have sounded far-fetched, if not nutty. Now, they seem prophetic. With smartphones ubiquitous, Facebook inescapable, and wearable computers proliferating, society is gaining a digital sensing system. People’s location and behavior are being tracked as they go through their days, and the resulting information is being transmitted instantaneously to vast server farms. Once we write the algorithms needed to parse all that “big data,” many sociologists and statisticians believe, we’ll be rewarded with a much deeper understanding of what makes society tick.

One of big data’s keenest advocates is Alex “Sandy” Pentland, a data scientist who, as the director of MIT’s Human Dynamics Laboratory, has long used computers to probe the dynamics of businesses and other organizations. In his brief but ambitious book, Social Physics, Pentland argues that our greatly expanded ability to gather behavioral data will allow scientists to develop “a causal theory of social structure” and ultimately establish “a mathematical explanation for why society reacts as it does” in all manner of circumstances. As the book’s title makes clear, Pentland thinks that the social world, no less than the material world, operates according to rules. There are “statistical regularities within human movement and communication,” he writes, and once we fully understand those regularities, we’ll discover “the basic mechanisms of social interactions.”

What has prevented us from deciphering society’s mathematical underpinnings up to now, Pentland believes, is a lack of empirical rigor in the social sciences. Unlike physicists, who can measure the movements of objects with great precision, sociologists have had to make do with fuzzy observations. They’ve had to work with rough and incomplete data sets drawn from small samples of the population, and they’ve had to rely on people’s notoriously flawed recollections of what they did, when they did it, and whom they did it with. Computer networks promise to remedy those shortcomings. Tapping into the streams of data that flow through gadgets, search engines, social media, and credit-card payment systems, scientists will be able to collect precise, real-time information on the behavior of millions, if not billions, of individuals. And because computers neither forget nor fib, the information will be reliable.

To illustrate what lies in store, Pentland describes a series of experiments that he and his associates have been conducting in the private sector. They go into a business and give each employee an electronic ID card, called a “sociometric badge,” that hangs from the neck and communicates with the badges worn by colleagues. Incorporating microphones, location sensors, and accelerometers, the badges monitor where people go and whom they talk with, taking note of their tone of voice and even their body language. The devices are able to measure not only the chains of communication and influence within an organization but also “personal energy levels” and traits such as “extraversion and empathy.” In one such study of a bank’s call center, the researchers discovered that productivity could be increased simply by tweaking the coffee-break schedule.

Pentland dubs this data-processing technique “reality mining,” and he suggests that similar kinds of information can be collected on a much broader scale by smartphones outfitted with specialized sensors and apps. Fed into statistical modeling programs, the data could reveal “how things such as ideas, decisions, mood, or the seasonal flu are spread in the community.”

The mathematical modeling of society is made possible, according to Pentland, by the innate tractability of human beings. We may think of ourselves as rational actors, in conscious control of our choices, but in reality most of what we do is reflexive. Our behavior is determined by our subliminal reactions to the influence of other people, particularly those in the various peer groups we belong to. “The power of social physics,” he writes, “comes from the fact that almost all of our day-to-day actions are habitual, based mostly on what we have learned from observing the behavior of others.” Once you map and measure all of a person’s social influences, you can develop a statistical model that predicts that person’s behavior, just as you can model the path a billiard ball will take after it strikes other balls.
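
To see that billiard-ball logic in miniature, here is a toy model (my own illustrative sketch in Python, not anything drawn from Pentland's book; the weights, bias, and logistic form are all assumptions) in which a person's probability of picking up a behavior is a function of weighted exposure to peers who already exhibit it:

```python
# Toy illustration only: the numbers and the logistic form are my own
# assumptions, not the actual models described in Social Physics.
import math

def adoption_probability(peer_behaviors, exposure_weights, bias=-1.0):
    """Estimate P(subject adopts a behavior) from weighted peer influence.

    peer_behaviors:   1 if a peer exhibits the behavior, else 0
    exposure_weights: how much contact the subject has with each peer
    """
    influence = sum(b * w for b, w in zip(peer_behaviors, exposure_weights))
    return 1 / (1 + math.exp(-(bias + influence)))

# A subject with three peers; the two she sees most both have the habit.
p = adoption_probability([1, 1, 0], [0.9, 0.8, 0.3])
```

Fit the weights from sociometric data and, in Pentland's telling, the person becomes nearly as predictable as the billiard ball. The sketch also shows how much the prediction depends on what the modeler chooses to count as exposure.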

Deciphering people’s behavior is only the first step. What really excites Pentland is the prospect of using digital media and related tools to change people’s behavior, to motivate groups and individuals to act in more productive and responsible ways. If people react predictably to social influences, then governments and businesses can use computers to develop and deliver carefully tailored incentives, such as messages of praise or small cash payments, to “tune” the flows of influence in a group and thereby modify the habits of its members. Beyond improving the efficiency of transit and health-care systems, Pentland suggests that group-based incentive programs can enhance the harmony and creativity of communities. “Our main insight,” he reports, “is that by targeting [an] individual’s peers, peer pressure can amplify the desired effect of a reward on the target individual.” Computers become, as McLuhan foresaw, civic thermostats. They not only register society’s state but bring it into line with some prescribed ideal. Both the tracking and the maintenance of the social order are automated.

Ultimately, Pentland argues, looking at people’s interactions through a mathematical lens will free us of time-worn notions about class and class struggle. Political and economic classes, he contends, are “oversimplified stereotypes of a fluid and overlapping matrix of peer groups.” Peer groups, unlike classes, are defined by “shared norms” rather than just “standard features such as income” or “their relationship to the means of production.” Armed with exhaustive information about individuals’ habits and associations, civic planners will be able to trace the full flow of influences that shape personal behavior. Abandoning general categories like “rich” and “poor” or “haves” and “have-nots,” we’ll be able to understand people as individuals—even if those individuals are no more than the sums of all the peer pressures and other social influences that affect them.

Replacing politics with programming might sound appealing, particularly given Washington’s paralysis. But there are good reasons to be nervous about this sort of social engineering. Most obvious are the privacy concerns raised by the collection of ever more intimate personal information. Pentland anticipates such criticism by arguing that public fears about privacy can be ameliorated through a “New Deal on Data” that gives people more control over the information collected about them. It’s hard, though, to imagine Internet companies agreeing to give up ownership of the behavioral information they hoard. The data are, after all, crucial to their competitive advantage and their profit.

Even if we assume that the privacy issues can be resolved, the idea of what Pentland calls “a data-driven society” remains problematic. Social physics is a variation on the theory of behaviorism that found favor in McLuhan’s day, and it suffers from the same limitations that doomed its predecessor. Defining social relations as a pattern of stimulus and response makes the math easier, but it ignores the deep, structural sources of social ills. Pentland may be right that people’s behavior is determined in large part by the social norms and influences exerted upon them by their peers, but what he fails to see is that those norms and influences are themselves shaped by history, politics, and economics, not to mention the ever-present forces of power and prejudice. People don’t have complete freedom in choosing their peer groups. Their choices are constrained by where they live, where they come from, how much money they have, and what they look like. A statistical model of society that ignores issues of class, that takes patterns of influence as givens rather than as historical contingencies, will tend to perpetuate existing social structures and dynamics. It will encourage us to optimize the status quo rather than challenge it.

Politics is messy because society is messy, not the other way around. Pentland does a commendable job in describing how better data can enhance social planning. But like other would-be social engineers he overreaches. Letting his enthusiasm get the better of him, he begins to take the metaphor of “social physics” literally, even as he acknowledges that influence-based mathematical models will always be reductive. “Because it does not try to capture internal cognitive processes,” he writes at one point, “social physics is inherently probabilistic, with an irreducible kernel of uncertainty caused by avoiding the generative nature of conscious human thought.” What big data can’t account for is what’s most unpredictable, and most interesting, about us.

The art of Instagram

Jacob Mikanowski has, in The Point, an invigorating essay on Instagram, that most civilized of social networks. It begins:

Of all the social networks, it’s the easiest, the simplest, the least full of harm. Let’s put it a different way. Facebook is Sauron. It’s also your mom’s couch, a yoga-center bulletin board, a school bus, a television tuned to every channel. Twitter is Grub Street, a press scrum, the crowd in front of a bar. Reddit is a tin-foil hat and a sewer. Snapchat is hover boards, Rock ’em Sock ’em Robots and Saturday morning cartoons. Instagram is a garden: curated, pruned, clean and pretty. It lets you be creative, but not too creative; communicate, but without saying too much. No embedding, no links—just photos, captions and hashtags. Elegant. Simple. Twenty-three filters. A crisp square around each frame.

Instagram, the charm of which seems fated to eventual extermination by Facebook, its charmless owner, is the only social network, other than, maybe, Tumblr, that Jane Austen might have liked. It is wrapped in unstated but almost ceremonious codes of conduct. It sometimes seems, as Mikanowski points out, like “an index of mores in the age of self-branding and self-surveillance.” And in its fleeting, multitudinous images we see, as in the bowl of a smoothie-making blender, a history of visual art. “Practically every photograph of nature on Instagram,” writes Mikanowski, by way of example, “stems in one way or another from the impact of the Romantic era.”

But if Instagram is inspired by what might be termed an artistic impulse — “that need to make life itself aesthetic, to ask, over and over ‘What will this look like in a square?’” — it also subverts that impulse, drains it of its upsetting, dislocating energies: filters it, frames it, captions it. In the busy world of Facebook, Instagram is the museum.

The music of mind-fracking

I have seen the future of music, and its name is ThinkEar.

A new audio gadget from, oddly enough, a Finnish oil company named Neste, ThinkEar is a set of “mind-controlled earphones” that will allow your brain to choose the songs you listen to without any input from your thumbs or other body parts. Let’s go to the press release:

The world is poised on the brink of a technological revolution; rapid progress in brain mapping technology means that the ability to control devices with our minds is no longer the stuff of science fiction. Neste’s ThinkEar earphones are a bold entertainment concept that offers thought-controlled personal audio.

If I had listened to Gary Numan instead of Gang of Four when I was growing up, I would have seen all this shit coming. I mean, the guy was already using an Amazon Echo in 1979.

Back to the press release:

Making full use of the latest developments in brain wearables, the earphone’s integrated 5 point EEG sensors are able to read your brainwaves while an integrated microcomputer translates them into interaction commands to navigate your audio content.

You know who had the nicest brain wearables? The Borg.

OK, so here’s where the press release reaches its climax:

Unlike other systems, the earphones are not tethered to any external device. [They] access your favorite cloud services directly.

Which means, of course, that the cloud services will also be able to access your brainwaves directly. (Interaction is not a one-way street.) And that’s where things get really cool — you might even say numanesque. Remember when I last wrote about the future of pop? It was a year ago when Google announced the shift of its Google Play Music service from the old paradigm of listener-selected music to the new paradigm of outsourced “activity-based” music. As Google explained:

At any moment in your day, Google Play Music has whatever you need music for — from working, to working out, to working it on the dance floor — and gives you curated radio stations to make whatever you’re doing better. Our team of music experts … crafts each station song by song so you don’t have to.

ThinkEar is the missing link in mind-free listening. With your ThinkEar EEG sensors in place, Google will be able to read your brainwaves, on a moment-by-moment basis, and serve up an engineered set of tunes perfectly geared to your mental state as well as your activity mode. Not only will you save enormous amounts of time that you would have wasted figuring out what songs you felt like listening to, but Google will be able to use its expertly crafted soundscapes to help keep your mental state within some optimal parameters.

Far-fetched? I don’t think so. It’s basically just Shazam in reverse. The music susses you.

The applications go well beyond music. Cloud services could, for instance, beam timely notifications or warnings to your ears based on what’s going on in your brain, either at the subconscious or the conscious level. Think of what Facebook could do with that kind of capability. And if Amazon melded ThinkEar with both Echo and Audible, it could automatically intervene in your thought processes by reading you inspiring passages from pertinent books, like, say, The Fountainhead.

Maybe it’s not so odd that an oil company would invent a set of mind-reading earbuds. Once the land is tapped out, the extraction industries are going to need a new target, and what could possibly be more lucrative than fracking the human brain?

Questioning Silicon Valley

Time magazine’s Rana Foroohar says my new book, Utopia Is Creepy, “punches a hole in Silicon Valley cultural hubris.” The book comes out on September 6, the day after Labor Day, but you can read an excerpt from the introduction at Aeon today.

“Computing is not about computers any more,” wrote Nicholas Negroponte of the Massachusetts Institute of Technology in his 1995 bestseller Being Digital. “It is about living.” By the turn of the century, Silicon Valley was selling more than gadgets and software: it was selling an ideology. The creed was set in the tradition of U.S. techno-utopianism, but with a digital twist. The Valley-ites were fierce materialists – what couldn’t be measured had no meaning – yet they loathed materiality. In their view, the problems of the world, from inefficiency and inequality to morbidity and mortality, emanated from the world’s physicality, from its embodiment in torpid, inflexible, decaying stuff. The panacea was virtuality – the reinvention and redemption of society in computer code. They would build us a new Eden not from atoms but from bits. All that is solid would melt into their network. We were expected to be grateful and, for the most part, we were.

Our craving for regeneration through virtuality is the latest expression of what Susan Sontag in On Photography described as “the American impatience with reality, the taste for activities whose instrumentality is a machine.” What we’ve always found hard to abide is that the world follows a script we didn’t write. We look to technology not only to manipulate nature but to possess it, to package it as a product that can be consumed by pressing a light switch or a gas pedal or a shutter button. We yearn to reprogram existence, and with the computer we have the best means yet. We would like to see this project as heroic, as a rebellion against the tyranny of an alien power. But it’s not that at all. It’s a project born of anxiety. Behind it lies a dread that the messy, atomic world will rebel against us. What Silicon Valley sells and we buy is not transcendence but withdrawal. The screen provides a refuge, a mediated world that is more predictable, more tractable, and above all safer than the recalcitrant world of things. We flock to the virtual because the real demands too much of us.

Read on.

Solitaire as symbol and synecdoche

“When a man is reduced to such a pass as playing cards by himself, he had better give up — or take to reading.” –Rawdon Crawley, The Card Player’s Manual, 1876

Big news out of the Googleplex today: the internet giant is offering a free solitaire game through its search engine and its mobile app. “When you search for ‘solitaire’ on Google,” goes the announcement on the company’s always breathless blog, “the familiar patience game may test yours!”

Pokémon Go, Candy Crush, Angry Birds, Farmville, Minesweeper, Space Invaders, Pong: computer games come and go, offering fleeting amusements before they turn stale.

But not solitaire. Solitaire endures.

Invented sometime in the eighteenth century, the single-player card game made a seamless leap to virtuality with the arrival of personal computers in the early 1980s. The gameplay was easy to program, and a deck of cards could be represented on even the most rudimentary of computer displays. Spectrum HoloByte’s Solitaire Royale became a huge hit when it was released in 1987. After Microsoft incorporated its own version of the game into the Windows operating system in 1990, solitaire quickly became the most used PC app of all time.

“Though on its face it might seem trivial, pointless, a terrible way to waste a beautiful afternoon, etc., solitaire has unquestionably transformed the way we live and work,” wrote Slate’s Josh Levin in 2008. “Computer solitaire propelled the revolution of personal computing, augured Microsoft’s monopolistic tendencies, and forever changed office culture.”

Google is late to the party, but it’s a party that will never end.

Microsoft had ulterior motives when it bundled solitaire into Windows — the game helped people learn how to use a mouse, and it kept them sitting in front of their Microsoft-powered computers like, to quote Iggy Pop, hypnotized chickens — and Google, too, is looking to accomplish something more than just injecting a little fun into our weary lives. “A minor move like putting games in search means that users – especially mobile users – will turn to the Google search app at a time when a lot of the information we need is available elsewhere on our devices,” reports TechCrunch.

It’s a devious game these companies play. We are but deuces in their decks.

Would it be too much of a stretch to suggest that solitaire is a perfect microcosm of personal computing, particularly now, in our social media age? In “The Psychology of Games,” a 2000 article in Psychology Review, Mark Griffiths pointed out that games are a “world-building activity.” They offer a respite from the demands of the real. “Freud was one of the first people to concentrate on the functions of playing games,” Griffiths wrote. “He speculated that game playing provided a temporary leave of absence from reality which reduced individual conflict and brought about a change from the passive to the active.” We love games because they “offer the illusion of control over destiny and circumstance.”

Solitaire, a game mixing skill and chance, also provides what psychologists call “intermittent reinforcement.” Every time a card is revealed, there is, for the player, the possibility of a reward. The suspense, and the yearning, is what makes the game so compelling, even addictive. “Basically,” wrote Griffiths, “people keep playing in the absence of a reward hoping that another reward is just around the corner.” Turning over an ace in solitaire is really no different from getting a like on Facebook or a retweet on Twitter. We crave such symbolic tokens of accomplishment, such sweet nothings.
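
That reward schedule is simple enough to simulate. In this sketch (mine, not Griffiths's; the one-in-four payoff rate is an arbitrary assumption), each "card flip" pays off unpredictably, and the dry stretches between rewards are exactly what keeps the player flipping:

```python
# Minimal simulation of an intermittent (variable-ratio) reward schedule.
import random

def play_session(flips, reward_prob=0.25, seed=7):
    """Deal out a session of flips in which each flip may pay off."""
    rng = random.Random(seed)  # seeded so the session is reproducible
    rewards = 0
    drought = 0          # flips since the last reward
    longest_drought = 0  # the stretch that keeps hope (and play) alive
    for _ in range(flips):
        if rng.random() < reward_prob:
            rewards += 1
            drought = 0
        else:
            drought += 1
            longest_drought = max(longest_drought, drought)
    return rewards, longest_drought

rewards, longest_drought = play_session(100)
```

Swap in a fixed, guaranteed payoff and the droughts disappear; so, the psychologists would say, does much of the compulsion.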

Shuffle that deck again, Google. This time I’m going to be a winner.