Zuckerberg’s world

The word “community” appears, by my rough count, 98 times in Mark Zuckerberg’s latest message to the masses. In a post-fact world, truth is approached through repetition. The message that is transmitted most often is the fittest message, the message that wins. Verification becomes a matter of pattern recognition. It’s the epistemology of the meme, the sword by which Facebook lives and dies.

Today I want to focus on the most important question of all: are we building the world we all want?

It’s a good question, though I’m not sure there is any world that we all want, and if there is one, I’m not sure Mark Zuckerberg is the guy I’d appoint to define it. And yet, from his virtual pulpit, surrounded by his 86 million followers, the young Facebook CEO hesitates not a bit to speak for everyone, in the first person plural. There is no opt-out to his “we.” It’s the default setting and, in Zuckerberg’s totalizing utopian vision, the setting is hardwired, universal, and nonnegotiable.

Our greatest opportunities are now global — like spreading prosperity and freedom, promoting peace and understanding, lifting people out of poverty, and accelerating science. Our greatest challenges also need global responses — like ending terrorism, fighting climate change, and preventing pandemics. Progress now requires humanity coming together not just as cities or nations, but also as a global community.  …

Facebook stands for bringing us closer together and building a global community. When we began, this idea was not controversial.

The reason the idea — that community-building on a planetary scale is practicable, necessary, and altogether good — did not seem controversial in the beginning was that Zuckerberg, like Silicon Valley in general, operated in a technological bubble, outside of politics, outside of history. Now that history has broken through the bubble and upset the algorithms, history must be put back in its place. Technological determinism must again be made synonymous with historical determinism.

In times like these, the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us.

Infrastructure is destiny. (The word “infrastructure” appears 24 times in Zuckerberg’s message.) Society is not a fluctuating arrangement of contending and at times noxious interests brought into a tenuous equilibrium through a difficult, ongoing process of negotiation and struggle. Society is itself a technology, a built thing that, correctly constructed, “works for all of us.” Get the engineering right, and the human community will scale as a computer network scales. Global harmony becomes a technological inevitability.

Just as the internet is a network of networks, so society, in Zuckerberg’s view, is a community of communities. “Building a global community that works for everyone,” he writes, “starts with the millions of smaller communities and intimate social structures we turn to for our personal, emotional and spiritual needs.” He points to “churches” and “sports teams” as examples of “local groups” that “share important roles as social infrastructure.” They form the “sub-communities” that are then connected, as roads are connected to form a highway system, into the “global community.” He comes back to the same two examples a little later, when he writes of people “coming together around religion or sports.”

Zuckerberg’s conflation of religion and sports is odd but illuminating. In his view, the tenets of a religion matter no more than the rules of a game; what’s essential about a church and a sports team is that they both form social infrastructure that serves to “bring us together and reinforce our values.” It’s only by separating individual beliefs from community formation, and then pretending those beliefs don’t really matter, that Zuckerberg is able to sustain the fantasy that all sub-communities share a set of values — values that derive from community itself, independent of the members’ motivations in forming a group. These common values play the same role in building a global community that common standards play in building the internet: they enable seamless interconnectivity.

Zuckerberg remains oblivious to the fact that a sub-community, particularly a religious one, may be formed on a foundation of belief that is incompatible with, and in opposition to, the beliefs of the surrounding community. As the Wall Street Journal‘s Ian Lovett writes today in an article on a traditionalist Catholic community that has grown around a Benedictine monastery in Oklahoma, “The 100 or so people living here are part of a burgeoning movement among traditional Christians. Feeling besieged by secular society, they are taking refuge in communities like this one, clustered around churches and monasteries, where faith forms the backbone of daily life.” Such communities are very different from sports teams. Their formative beliefs aren’t some sort of standardized Lego infrastructure that enables the expression of universal community values. The beliefs of the individuals in the community are the values of the community, and they are anything but common standards.

The problems with Zuckerberg’s self-serving fantasy about social relations become even more pronounced when we turn to “sub-communities” of creeps and miscreants who share poisonous beliefs — neo-Nazi groups, say, or racist groups or misogynistic groups or groups of murderous ideologues (or even groups of amoral entrepreneurs who seek to make a quick buck by spreading fake news stories through the web). Here, too, the beliefs of the individual members of the community form the values of the community — values that, thankfully, are anything but common standards. “The purpose of any community is to bring people together to do things we couldn’t do on our own,” Zuckerberg writes, without any recognition that those “things” could be bad things. Even though the actions of sociopathic groups, in particular their use of Facebook and other social networks not as a metaphorical infrastructure for global harmony but as a very real infrastructure for recruitment, propaganda, planning, and organization, would seem to be one of the spurs for Zuckerberg’s message, he is blind to the way they contradict that message. Nastiness, envy, chauvinism, mistrust, anger, vanity, greed, enmity, hatred: for Zuckerberg, these aren’t features of the human condition; they are bugs in the network.

Tension and conflict, then, become technical problems, amenable to technical solutions. And so, rather than questioning Facebook’s assumptions about society — might global community-building, pursued through media structures, end up encouraging polarization and tribalism? — and the role the company plays in society, Zuckerberg ends up back where he always ends up: with a batch of new hacks. There will be new algorithmic filters, new layers of artificial intelligence, new commenting and rating systems, new techniques for both encryption and surveillance. The bugs — bad actors and bad code — will be engineered out of the system. Zuckerberg’s program, as Ars Technica’s Annalee Newitz points out, is filled with contradictions, which he either won’t acknowledge or, thanks to his techno-utopian tunnel vision, can’t see. He makes a big deal, for instance, of a new initiative through which Facebook will provide management tools for organizing what he calls “very meaningful” communities — groups characterized by passionate members under the direction of a strong leader. The example Zuckerberg offers — a group dedicated to helping refugees find homes — sounds great, but it’s not hard to see how such tools, deployed in the context of Facebook’s emotionalist echo chamber, could be used to mobilize some very nasty groups, of just the sort that Facebook is hoping to purge from its network. “The best communities in the world have leaders,” Zuckerberg said in an interview promoting his so-called manifesto. So do the worst, Mark.

Toward the end of his message, Zuckerberg writes, “In recent campaigns around the world — from India and Indonesia across Europe to the United States — we’ve seen the candidate with the largest and most engaged following on Facebook usually wins.” One might think that this observation would inspire some soul-searching on Zuckerberg’s part. But he offers it as a boast. Facebook is never the problem; it is always the solution.

No one wants to break a butterfly on a wheel, even if the butterfly is a billionaire. And only a fool would look to an official communiqué from the CEO of a big company for honest, subtle thinking about complicated social issues. And yet, in Zuckerberg’s long message, there is one moment of clarity, when he states the plain truth: “Social media is a short-form medium where resonant messages get amplified many times. This rewards simplicity and discourages nuance.” The medium, he continues, often “oversimplifies important topics and pushes us toward extremes.” This insight might have led Zuckerberg to a forthright accounting of the limitations of Facebook as a communications system. He might have pointed out that while Facebook is well designed for some things — banter among friends, the sharing of photos and videos, the coordination of group actions (for better or worse), the circulation of information in emergencies, advertising — it is ill designed for other things. It’s lousy as a news medium. It’s terrible as a forum for political discourse. It’s not the place to go to get a deep, well-rounded view of society. As a community, it’s pretty sketchy. And, he might have concluded, if you expect Facebook to solve the problems of the world, you’ve taken me far too seriously.

Image: “Lego City: Collapse” by Eirik Newth.

Anxiety and surveillance: pillars of the new economy

The terms addiction and compulsion tend to be used loosely and often interchangeably. But in an article in the Wall Street Journal, science writer Sharon Begley draws a simple but illuminating distinction between the two psychological disorders: addiction is born of pleasure, while compulsion is anxiety’s child.

Behavioral addictions begin in pleasure. But compulsions, according to a growing body of scientific evidence, are born in anxiety and remain strangers to joy. They are repetitive behaviors that we engage in repeatedly to alleviate the angst brought on by the possibility of harmful consequences.

The drunk seeks to regain the sense of well-being that the last shot of bourbon provided. The compulsive hoarder seeks to alleviate the dread that something valuable has been lost. Compulsion is both “balm and curse,” writes Begley. A compulsive act briefly mitigates feelings of anxiety, but the very experience of relief reinforces the anxiety. The anxiety ends up feeling more real, more pressing — and even more in need of relief. Anxiety and compulsion become a self-reinforcing cycle.

Compulsions can be so severe as to be debilitating. But they also, and much more routinely, take milder forms. They alter our thoughts and behavior, sometimes in deep ways, without making us dysfunctional in society. In fact, by tempering our anxiety, they may serve as a kind of therapy that protects our social functionality. Since ours is, as Auden suggested, an age of anxiety, it’s no surprise that it is also an age of compulsion.

The near-universal compulsion of the present day is, as we all know and as behavioral studies prove, the incessant checking of the smartphone. As Begley notes, with a little poetic hyperbole, we all “feel compelled to check our phones before we get out of bed in the morning and constantly throughout the day, because FOMO — the fear of missing out — fills us with so much anxiety that it feels like fire ants swarming every neuron in our brain.” With its perpetually updating, tightly personalized messaging, networking, searching, and shopping apps, the smartphone creates the anxiety that it salves. It’s a machine almost perfectly designed to turn its owner into a compulsive.

Needless to say, a portable, pocket-sized product that spurs and sustains compulsive use can be a very lucrative product for any company able to tap into its hypnotic power. The smartphone is the perfect consumer good for the age of anxiety. It’s hardly an exaggeration to say that, from a commercial standpoint, the smartphone is to compulsion what the cigarette pack was to addiction.

In a recent post, I highlighted the business scholar Shoshana Zuboff’s idea that, with the arrival of the internet, capitalism has begun to take on a new form. Traditional product-based competition (sell an attractive good at a fair price) is being displaced by data-based competition (collect the richest store of information about the identity and behavior of individual consumers). In this new industrial system, which Zuboff calls surveillance capitalism, “profits derive from the unilateral surveillance and modification of human behavior.”

While surveillance capitalism taps the invasive powers of the Internet as the source of capital formation and wealth creation, it is now, as I have suggested, poised to transform commercial practice across the real world too.  An analogy is the rapid spread of mass production and administration throughout the industrialized world in the early twentieth century, but with one major caveat. Mass production was interdependent with its populations who were its consumers and employees. In contrast, surveillance capitalism preys on dependent populations who are neither its consumers nor its employees and are largely ignorant of its procedures.

The concept of surveillance capitalism helps explain the dynamics of a growing part of the economy. But it doesn’t explain everything. It focuses on the supply side (what motivates companies) while largely ignoring the demand side (what motivates consumers). I’d suggest that the secret to understanding the demand side may lie in the anxiety-compulsion cycle. What motivates consumers is anxiety — not just the fear of missing out, but also the dread of becoming invisible or losing status, the worry that others might know something that you don’t know, the nervousness that a message might have been misconstrued, and so on — and this anxiety spurs the compulsive behavior that generates ever more personal data for surveillance capitalists to harvest. We divulge our secrets because we can’t help ourselves.

This powerful, compulsion-fueled business model may have emerged by accident — I’m pretty sure that Larry Page and Sergey Brin didn’t found Google with the intent of spreading social anxiety and then capitalizing on it through surveillance systems — but it is now sustained by design. Facebook doesn’t hire cognitive psychologists and maintain a behavioral research lab for nothing. Rewards now flow to the competitor that is best able to maximize consumer anxiety in a way that spurs more compulsive behavior that in turn generates more valuable consumer data that can, to complete the cycle, be deployed to further manipulate consumer psychology.

That’s a dark way of putting it, to be sure — it ignores the real benefits that consumers gain from many online services — but it does seem to explain the governing logic of what we once happily termed “the new economy.”

Photo: University of Alaska Anchorage.

You’ve got mail

From an essay on Radiohead by Mark Greif, in his book Against Everything:

A description of the condition of the late 1990s could go like this: At the turn of the millennium, each individual sat at a meeting point of shouted orders and appeals, the TV, the radio, the phone and cell, the billboard, the airport screen, the inbox, the paper junk mail. Each person discovered that he lived at one knot of a network, existing without his consent, which connected him to any number of recorded voices, written messages, means of broadcast, channels of entertainment, and avenues of choice. It was a culture of broadcast: an indiscriminate seeding, which needed to reach only a very few, covering vast tracts of our consciousness. To make a profit, only one message in ten thousand needed to take root; therefore messages were strewn everywhere. To live in this network felt like something, but surprisingly little in the culture of broadcast itself tried to capture what it felt like. Instead, it kept bringing pictures of an unencumbered, luxurious life, songs of ease and freedom, and technological marvels, which did not feel like the life we lived.

And if you noticed you were not represented? It felt as if one of the few unanimous aspects of this culture was that it forbade you to complain, since if you complained, you were a trivial human, a small person, who misunderstood the generosity and benignity of the message system. It existed to help you. Now, if you accepted the constant promiscuous broadcasts as normalcy, there were messages in them to inflate and pet and flatter you. If you simply said that this chatter was altering your life, killing your privacy or ending the ability to think in silence, there were alternative messages that whispered of humiliation, craziness, vanishing. What sort of crank needs silence? What could be more harmless than a few words of advice? The messages did not come from somewhere; they were not central, organized, intelligent, intentional. It was up to you to change the channel, not answer the phone, stop your ears, shut your eyes, dig a hole for yourself and get in it. Really, it was your responsibility. The metaphors in which people tried to complain about these developments, by ordinary law and custom, were pollution (as in “noise pollution”) and theft (as in “stealing our time”). But we all knew the intrusions felt like violence. Physical violence, with no way to strike back.

You’ve got mail! That old AOL audio announcement always felt perfectly anodyne — so anodyne that it almost seemed fated to become the hook for a romcom starring Tom Hanks and Meg Ryan. Yet at the same time, and this has become clearer in retrospect, it was a threat. The computerized voice, chipper, friendly, always feigning surprise and excitement at the news it delivered, carried a demanding and judgmental undertone. It was a parental voice. You’ve got mail — and you need to go to your inbox and attend to this new mail quickly. Right now, in fact. Only a churlish, sad, unsociable creep would let a new message sit unread in an inbox. You don’t want to be a churlish, sad, unsociable creep, do you?

A threat, and a prophecy. Even as AOL faded away, you’ve got mail burrowed deeper into our consciousness. It became more than a voice in our heads. It became the voice in our heads. It was never a voice of our own — it comes out of the mouth of a stranger, a stranger with an agenda — and yet it now runs through our minds as if on a continuous tape loop. Its implicit command no longer feels like a command. It feels, almost, like a natural phenomenon. We do its bidding intuitively. To be irritated by the voice, even in passing, is, as Greif suggests, to admit to a smallness of self. And so we bury ever deeper that sense of being violated. It’s not the messages that matter anymore. Messages come and go. It’s the messaging that matters. Messaging has become our state of being, the atmosphere in our heads.

And, yes, you’ve got mail.

From Fordism to Googlism

From “The Watchers,” an article by Jonathan Shaw in the new issue of Harvard Magazine:

[Shoshana] Zuboff says that corporate use of personal data has set society on a path to a new form of capitalism that departs from earlier norms of market democracy. She draws an analogy from the perfection of the assembly line: Ford engineers’ discovery a century ago, after years of trial and error, that they had created “a logic of high-volume, low-unit cost, which really had never existed before with all the pieces aligned.” Today, many corporations follow a similar trajectory by packaging personal data and behavioral information and selling it to advertisers: what she calls “surveillance capitalism.”

“Google was ground zero,” Zuboff begins. At first, information was used to benefit end users, to improve searches, just as Apple and Amazon use their customers’ data largely to customize those individuals’ online experiences. Google’s founders once said they weren’t interested in advertising. But Google “didn’t have a product to sell,” she explains, and as the 2001 dot.com bubble fell into crisis, the company was under pressure to transform investment into earnings. “They didn’t start by saying, ‘Well, we can make a lot of money assaulting privacy,’” she continues. Instead, “trial and error and experimentation and adapting their capabilities in new directions” led them to sell ads based on personal information about users. Like the tinkerers at Ford, Google engineers discovered “a way of using their capabilities in the context of search to do something utterly different from anything they had imagined when they started out.” Instead of using the personal data to benefit the sources of that information, they commodified it, using what they knew about people to match them with paying advertisers. As the advertising money flowed into Google, it became a “powerful feedback loop of almost instantaneous success in these new markets.”

“Those feedback loops become drivers themselves,” Zuboff explains. “This is how the logic of accumulation develops … and ultimately flourishes and becomes institutionalized. That it has costs, and that the costs fall on society, on individuals, on the values and principles of the liberal order for which human beings have struggled and sacrificed much over millennia—that,” she says pointedly, “is off the balance sheet.”

Privacy values in this context become externalities, like pollution or climate change, “for which surveillance capitalists are not accountable.” In fact, Zuboff believes, “Principles of individual self-determination are impediments to this economic juggernaut; they have to be vanquished. They are friction.” The resulting battles will be political. They will be fought in legislatures and in the courts, she says. Meanwhile, surveillance capitalists have learned to use all necessary means to defend their claims, she says: “through rhetoric, persuasion, threat, seduction, deceit, fraud, and outright theft. They will fight in whatever way they must for this economic machine to keep growing. … This is an economic logic that must delete privacy in order to be successful.”

The Uber advantage

The Guardian reports:

Uber has admitted that there is a “problem” with the way autonomous vehicles cross bike lanes, raising serious questions about the safety of cyclists days after the company announced it would openly defy California regulators over self-driving vehicles.

Maybe it’s the bicycle riders who are the “problem” here. You’d think they’d have sense enough to get out of the way of the future, particularly in San Francisco.

Uber will lose some $3 billion this year, after losing $2.2 billion last year. Even by the exuberant standards of the internet industry, the company is a remarkably effective cash-burning machine.* By comparison, the largest annual loss posted by Amazon.com, no slouch when it comes to losing money, totaled $1.4 billion, back in 2000.

We’re often told that companies like Uber and Amazon are masters of business innovation and industry disruption. But an argument could be made that what they’re really masters of is getting investors, whether in public or private markets, to cover massive losses over long periods of time. The generosity of the capital markets is what allows Uber and its ilk to subsidize purchases by customers, again on a massive scale and over many years. It’s worth asking whether these subsidies are the real engine behind much of the tech industry’s vaunted wave of disruption. After all, the small businesses being disrupted — local taxi companies and book shops, for instance — don’t have sugar daddies underwriting their existence. They actually have to make money, day after day, to pay their employees and their bankers. They have to charge real prices, not make-believe ones.

Some will argue that the capital markets are acting rationally, investing for future returns. But if those future returns are predicated on the killing off of competitors through years of investor-subsidized predatory pricing and other economically dubious behavior, how rational are the capital markets’ actions, really? At some point, it starts to smell like a market failure rather than a market success.

Uber will reportedly meet with officials from California’s attorney general and motor vehicles departments later today to discuss its rollout of self-driving taxis in apparent violation of state law. The company likes to present itself as a juggernaut, an inevitability, but really it’s more of a paper tiger. It may have succeeded in exempting itself from the rule of economics, but it shouldn’t be allowed to exempt itself from the rule of law.

______________

*It’s hardly a surprise that president-elect Donald Trump would pick Uber CEO Travis Kalanick as one of his strategic advisers. The league of gentlemen who require ten figures to report their annual losses is quite small.

Thomas Schelling, polarization and the web


Thomas Schelling has died. Schelling’s pathbreaking work in game theory had enormous influence during the Cold War and ultimately earned him a Nobel Prize. It also helps illuminate some of the unexpected consequences of the internet as a medium for information-gathering and conversation — in particular the technology’s tendency to breed ideological polarization (a tendency that shaped political discourse during the recent presidential campaign). In my 2008 book The Big Switch, I discussed how one of Schelling’s papers, “Dynamic Models of Segregation,” holds important lessons for making sense of social dynamics online:

In 1971, the economist Thomas Schelling performed a simple experiment that had a very surprising result. He was curious about the persistence of extreme racial segregation in the country. He knew that most Americans are not racists or bigots, that we’re generally happy to be around people who don’t look or think the same way we do. At the same time, he knew that we’re not entirely unbiased in the choices we make about where we live and whom we associate with. Most of us have a preference, if only a slight one, to be near at least some people who are similar to ourselves. We don’t want to be the only black person or white person, or the only liberal or conservative, on the block. Schelling wondered whether such small biases might, over the long run, influence the makeup of neighborhoods.

He began his experiment by drawing a grid of squares on a piece of paper, creating a pattern resembling an oversized checkerboard. Each square represented a house lot. He then randomly placed a black or a white marker in some of the squares. Each marker represented either a black or a white family. Schelling assumed that each family desired to live in a racially mixed neighborhood, and that’s exactly what his grid showed at the start: the white families and the black families were spread across the grid in an entirely arbitrary fashion. It was a fully integrated community. He then made a further assumption: that each family would prefer to have some nearby neighbors of the same color as themselves. If the percentage of neighbors of the same color fell beneath 50 percent, a family would have a tendency to move to a new house.

On the internet, making a community-defining decision
is as simple as clicking a link.

On the basis of that one simple rule, Schelling began shifting the markers around the grid. If a black marker’s neighbors were more than 50 percent white or if a white marker’s neighbors were more than 50 percent black, he’d move the marker to the closest unoccupied square. He continued moving the pieces until no marker had neighbors that were more than 50 percent of the other color. At that point, to Schelling’s astonishment, the grid had become completely segregated. All the white markers had congregated in one area, and all the black markers had congregated in another. A modest, natural preference to live near at least a few people sharing a similar characteristic had the effect, as it influenced many individual decisions, of producing a dramatic divide in the population. “In some cases,” Schelling explained, “small incentives, almost imperceptible differentials, can lead to strikingly polarized results.”
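Schelling’s pencil-and-paper procedure is easy to reproduce in a few lines of code. The sketch below is my own minimal rendering of the model as described above, not Schelling’s original program (the function names, grid size, and the choice to move an unhappy family to a random empty square rather than the nearest one are assumptions made for brevity); it seeds a grid at random, repeatedly relocates any family with fewer than 50 percent same-color neighbors, and reports how clustered the final arrangement is.

```python
import random

def neighbors(grid, r, c, size):
    """Yield the colors of occupied cells adjacent to (r, c)."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < size and 0 <= cc < size and grid[rr][cc] is not None:
                yield grid[rr][cc]

def unhappy(grid, r, c, size, threshold=0.5):
    """A family is unhappy if fewer than `threshold` of its neighbors share its color."""
    colors = list(neighbors(grid, r, c, size))
    if not colors:
        return False
    same = sum(1 for col in colors if col == grid[r][c])
    return same / len(colors) < threshold

def simulate(size=20, fill=0.8, threshold=0.5, max_steps=10000, seed=42):
    """Fill a size x size grid at random, then move unhappy families until none remain
    (or until max_steps relocations have been made)."""
    rng = random.Random(seed)
    cells = [(r, c) for r in range(size) for c in range(size)]
    grid = [[None] * size for _ in range(size)]
    for (r, c) in cells:
        if rng.random() < fill:
            grid[r][c] = rng.choice(("B", "W"))
    for _ in range(max_steps):
        movers = [(r, c) for (r, c) in cells
                  if grid[r][c] is not None and unhappy(grid, r, c, size, threshold)]
        if not movers:
            break  # everyone is content; the grid has settled
        r, c = rng.choice(movers)
        empties = [(rr, cc) for (rr, cc) in cells if grid[rr][cc] is None]
        rr, cc = rng.choice(empties)
        grid[rr][cc], grid[r][c] = grid[r][c], None
    return grid

def mean_same_neighbor_share(grid):
    """Average fraction of same-color neighbors: ~0.5 for a random mix,
    approaching 1.0 as the grid segregates."""
    size = len(grid)
    shares = []
    for r in range(size):
        for c in range(size):
            if grid[r][c] is None:
                continue
            colors = list(neighbors(grid, r, c, size))
            if colors:
                same = sum(1 for col in colors if col == grid[r][c])
                shares.append(same / len(colors))
    return sum(shares) / len(shares)
```

Running `simulate()` and then `mean_same_neighbor_share()` on the result typically shows the same-color share climbing well above the roughly even mix the grid starts with — the “strikingly polarized results” Schelling described, emerging from a preference that is only modest at the individual level.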

It was a profound insight, one that, years later, would be cited by the Royal Swedish Society of Sciences when it presented Schelling with the 2005 Nobel Prize in Economics. Mark Buchanan, in his book Nexus, summarized the broader lesson of the experiment well: “Social realities are fashioned not only by the desires of people but also by the action of blind and more or less mechanical forces—in this case forces that can amplify slight and seemingly harmless personal preferences into dramatic and troubling consequences.”

Just as it’s assumed that the Internet will create a rich and diverse culture, it’s also assumed that it will bring people into greater harmony, that it will breed greater understanding and help ameliorate political and social tensions. On the face of it, that expectation seems entirely reasonable. After all, the Internet erases the physical boundaries that separate us, allows the free exchange of information about the thoughts and lives of others, and provides an egalitarian forum in which all views can get an airing. The optimistic view was perhaps best expressed by Nicholas Negroponte, the head of MIT’s Media Lab, in his 1995 bestseller Being Digital. “While the politicians struggle with the baggage of history, a new generation is emerging from the digital landscape free of many of the old prejudices,” he wrote. “Digital technology can be a natural force drawing people into greater world harmony.”

But Schelling’s experiment calls this view into question. Not only will the process of polarization tend to play out in virtual communities in the same way it does in neighborhoods, but it seems likely to proceed much more quickly online. In the real world, with its mortgages and schools and jobs, the mechanical forces of segregation move slowly. There are brakes on the speed with which we pull up stakes and move to a new house. Internet communities have no such constraints. Making a community-defining decision is as simple as clicking a link. Every time we subscribe to a blog, add a friend to our social network, categorize an e-mail message as spam, or even choose a site from a list of search results, we are making a decision that defines, in some small way, whom we associate with and what information we pay attention to. Given the presence of even a slight bias to be connected to people similar to ourselves—ones who share, say, our political views or our cultural preferences—we would, like Schelling’s hypothetical homeowners, end up in ever more polarized and homogeneous communities. We would click our way to a fractured society.

Image: “Checkers” by Emily Cline.

I WANT WINGS!!!!!!!!!!!!!!


What is divinity if it can come
Only in silent shadows and in dreams?
— Wallace Stevens, “Sunday Morning”

I. Angels and Superheroes

In 2008, Samuel O. Poore, a plastic surgeon who teaches at the University of Wisconsin’s medical school, published an article in the Journal of Hand Surgery titled “The Morphological Basis of the Arm-to-Wing Transition.” Drawing on evolutionary and anatomical evidence, he laid out a workable method for using the techniques of modern reconstructive surgery, including bone fusing and skin and muscle grafting, to “fabricate human wings from human arms.” Although the wings, in the doctor’s estimation, would not be capable of generating the lift needed to get a person off the ground, they might nonetheless serve “as cosmetic features simulating, for example, the nonfunctional wings of flightless birds.”

We have always envied birds their wings. From angels to superheroes, avian-human hybrids have been fixtures of myth, legend, and art. In the ninth century, the celebrated Andalusian inventor Abbas ibn Firnas fashioned a pair of wings out of wood and silk, attached them to his back, covered the rest of his body in feathers, and jumped from a promontory. He avoided the fate of his forebear Icarus, but “in alighting,” a witness reported, “his back was very much hurt.” Leonardo da Vinci sketched scores of plans for winged, human-powered flying machines called ornithopters. Batman’s pinion-pointed cape looms over popular culture. Birdman won the best picture Oscar in 2015. “Red Bull gives you wings,” promise the energy drink’s ads.

Dr. Poore considered his paper a thought experiment, and he ended it with an admonition: “Humans should remain human, staying on the ground pondering and studying the intricacies of flight while letting birds be birds and angels be angels.” Not everyone shared his caution. Advocates of radical human enhancement, or transhumanism, found inspiration in the article. One of them, writing on a popular transhumanist blog, suggested that it might soon be possible to craft working human wings by combining surgical techniques with synthetic muscles and genetic modifications. “Many humans have wished they could fly,” the blogger wrote. “There’s nothing morally wrong with granting that wish.” The post garnered more than seven hundred comments. “I WANT WINGS!!!!!!!!!!!!!!” went a typical one. “For as long as i can remember i have been longing to feel the wind in my feathers” went another.


II. Stronger, Smarter, Fitter

When Nora Ephron decided to call her 2006 essay collection I Feel Bad about My Neck, she all but guaranteed herself a bestseller. Prone to wattling and wrinkling, banding and bagging, the neck has long been a focal point of people’s discontent with their bodies. But it’s hardly the only body part that provokes disappointment and frustration. From miserly hair follicles to yellowed toenails, from forgetful brains to balky bowels, the body seems intent on reminding us of its flaws and insufficiencies. One thing that sets us apart from our animal kin is our ability to examine our bodies critically, as if they were things separate from ourselves. We may not think of ourselves as Cartesians anymore, but we remain dualists when it comes to distinguishing the self from its physical apparatus. And it’s this ability to envision our bodies as instruments that allows us to imagine ways we might remodel or retrofit our anatomies to better reflect our desires and ideals. Our minds are always drafting new blueprints for our bodies.

We’re quick to associate body modification with primitive cultures—the stereotypical savage with the bone in his nose—but that’s a self-flattering fancy, a way to feel enlightened and civilized at the expense of others. When it comes to fiddling with the human body, we make even the most brutish of our ancestors look like amateurs. We go under the blade for nose jobs, tummy tucks, breast augmentations, hair transplants, face lifts, butt lifts, liposuctions, and myriad other cosmetic surgeries. We smooth our skin with dermabrasion brushes or chemical peels, conceal wrinkles with injections of botulinum toxin or hyaluronic filler. We brighten our smiles with whiteners and veneers, implants and orthodontia. We tattoo, pierce, and scarify our flesh. We swallow drugs and other potions to fine-tune our moods, sharpen our thinking, bulk up our musculature, control our fertility, and heighten our sexual prowess and pleasure. If to be transhuman is to use technology to change one’s body from its natural state, for ornamental or functional purposes, then we are all already transhuman.

But our tinkering, however impressive, is only a prelude. The ability of human beings to alter and augment themselves is set to expand enormously in the decades ahead, thanks to a convergence of scientific and technical advances in such areas as robotics, bioelectronics, genetic engineering, and pharmacology. Up to now, body modifications have tended to be decorative or therapeutic. They’ve been used to improve or otherwise change people’s looks or to repair damage from illnesses or wounds. Rarely have they offered people ways to transcend the body’s natural limits. The future will be different. Progress in the field broadly known as biotechnology promises to make us stronger, smarter, and fitter, with sharper senses and more capable minds and bodies. Transhumanists have good reason to be excited. By the end of the twenty-first century what it means to be human is likely to be very different from what it means today.


War and medicine are the crucibles of human enhancement. They’re where the need is pressing, the money plentiful. Military researchers, building on recent refinements in prosthetic arms and legs, are testing so-called Iron Man suits—artificial exoskeletons worn inside uniforms—that give soldiers greater strength, agility, and endurance. Wearing one current version, a G.I. can run a four-minute mile while carrying a full load of gear. Prototypes of more sophisticated bionic armor, which can sharpen vision, enhance situational awareness, and regulate body temperature along with boosting mobility and muscle, are in testing by the U.S. Special Operations Command. The merging of man and machine is well under way.

That goes for the gray matter, too. In 2014, DARPA, the military’s R&D arm, established a well-financed Biological Technologies Office to work on the frontiers of human enhancement. The new division’s broad portfolio includes a raft of ambitious neuroengineering projects aimed at bolstering mental skill and accomplishment on and off the battlefield. In the works are brain implants that, in the agency’s words, “facilitate the formation of new memories and retrieval of existing ones,” neural interfaces that “reliably extract information from the nervous system . . . at a scale and rate necessary to control complex machines,” and centimeter-sized neural “modems” that allow high-speed, standardized data transmissions between brains and computers.

While neuroscientists are still a long way from understanding consciousness and thought, they are, as the DARPA projects suggest, having success in reverse engineering many cognitive and sensory functions. Whenever knowledge of the brain expands, so too do the possibilities for designing tools to manipulate and augment mental processes. Cochlear implants, which translate sound waves into electrical signals and transmit them to the brain’s auditory nerve, have already given tens of thousands of deaf people the ability to hear. In 2013, the Food and Drug Administration approved the first retinal implant. It gives sight to the blind by wiring a digital camera to the optic nerve. Scientists at Case Western Reserve University are developing a brain chip that monitors and adjusts levels of neurotransmitters, like dopamine, that regulate brain functions. The researchers say the chip, which has been successfully tested in mice, works like a “home thermostat” for mental states.

Many such neural devices are in the early stages of development, and most are designed to aid the sick or disabled. But neuroengineering is progressing swiftly, and there’s every reason to believe that implants and interfaces will come to be used by healthy people to gain new and exotic talents. “Advances in molecular biology, neuroscience and material science are almost certainly going to lead, in time, to implants that are smaller, smarter, more stable and more energy-efficient,” brain scientists Gary Marcus and Christof Koch explained in a 2014 Wall Street Journal article. “When the technology has advanced enough, implants will graduate from being strictly repair-oriented to enhancing the performance of healthy or ‘normal’ people.” We’ll be able to use them to improve memory, focus, perception, and temperament, the scientists wrote, and, eventually, to speed the development of manual and mental skills by automating the assembly of neural circuitry.

These examples all point to a larger truth, one that lies at the heart of the transhumanist project. The human species is, in form and function, subject to biological constraints. It changes at the glacial pace of evolution. As soon as we augment the body with machinery and electronics, we accelerate the speed at which it can change. We shift the time scale of physiological adaptation from the natural, measured in millennia, to the technological, which plays out over decades, years, or mere months. Biology, when seen from a human perspective, is more about stasis than change. But when it comes to technology, nothing stands still. What’s rudimentary today can be revolutionary tomorrow.


III. The Daedalus Mission

The changes wrought by prosthetics, implants, and other hardware will play out in plain view. Blurring the line between tools and their users, they will turn people into what science fiction writers like to call cyborgs. More profound may be the microscopic changes accomplished by manipulating chemical reactions within and between cells. Advances in neurobiology have made possible a new generation of psychoactive drugs that will give individuals greater control over how their minds work. Exploiting the recent discovery that memories are malleable—they seem to change each time they’re recalled—researchers are testing drugs that can, by blocking chemicals involved in memory formation, delete or rewrite troubling memories as they’re being retrieved by the mind. A study by two Dutch psychologists, published in the journal Biological Psychiatry in 2015, shows that similar “amnesic” medications may be able to erase deep-seated phobias, such as a fear of spiders or strangers, by scrubbing certain memories of their emotional connotations. Such pharmaceutical tools point to a future in which we will be able to revise our sense of the past and, since what we remember is what we are, shape the self.

Also coming out of pharmaceutical labs are drugs that accelerate learning and heighten intelligence by speeding up neuronal activity, tamping down extraneous brain signals, and stimulating new connections among nerve cells. As with brain implants, these so-called smart drugs, or neuroenhancers, are intended to be used medicinally—to help children with Down syndrome do better at school or to combat mental decay in the elderly—but they also hold promise for making able-minded people cleverer. For years now, drugs designed to treat attention and sleep disorders, like Adderall and Provigil, have been used by students and professionals to sharpen their mental focus and increase their productivity. As new drugs for cognitive enhancement become available, they too will see widespread “off-label” use. Eventually, well-vetted neuroenhancers that don’t produce severe side effects will be cleared for general use. The economic and social advantages of enhanced intelligence, however narrowly defined, will override medical and moral qualms. Cosmetic neurology will join cosmetic surgery as a consumer choice.

Then there’s genetic engineering. The much discussed gene-editing tool Crispr, derived from bacterial immune systems, has in just the last three years transformed genomic research. Scientists can rewrite genetic code with far greater speed and precision, and at far lower cost, than was possible before. In simple terms, Crispr pinpoints a target sequence of DNA on a gene, uses a bacterial enzyme to snip the sequence out, and then splices a new sequence in its place. The inserted genetic material doesn’t have to come from the same species. Scientists can mix and match bits of DNA from different species, creating real-life chimeras.

With thousands of academic and corporate researchers, not to mention scores of amateur biohackers, experimenting with Crispr, progress in genome editing has reached a “breakneck pace,” according to Jennifer Doudna, a University of California biochemist who helped develop the tool. Combined with ever more comprehensive genomic maps, Crispr promises to expand the bounds of gene therapy, giving doctors new ways to repair disease-causing mutations and anomalies in DNA, and may allow transplantable human organs to be grown in pigs and other animals. Crispr also brings us a step closer to a time when genetic engineering will be practicable for a variety of human enhancements, at both the individual and the species level. Wings are not out of the question.

Transhumanists are technology enthusiasts, and technology enthusiasts are not the most trustworthy guides to the future. Their speculations tend to spiral into sci-fi fantasies. Some of the most hyped biotechnologies will fail to materialize or will fall short of expectations. Others will take longer to pan out than projections suggest. As innovation researchers Paul Nightingale and Paul Martin point out in an article in the journal Trends in Biotechnology, the translation of scientific breakthroughs into practical technologies remains “more difficult, costly and time-consuming” than is often supposed. That’s particularly true of medical procedures and pharmaceutical compounds, which often require years of testing and tweaking before they’re ready for the market. Although Crispr is already being used to reengineer goats, monkeys, and other mammals—Chinese researchers have created beagles with twice the normal muscle mass—scientists believe that, barring rogue experiments, clinical testing on people remains years away.


But even taking a skeptical view of biotechnology, discounting wishful forecasts of immortality, designer babies, and computer-generated superintelligence, it’s clear that we humans are in for big changes. The best evidence is historical rather than hypothetical. In just the past decade, many areas of biotechnology, particularly those related to genomics and computing, have seen extraordinary gains. The advances aren’t going to stop, and experience suggests they’re more likely to accelerate than to slow. Whether techniques of radical human enhancement arrive in twenty years or fifty, they will arrive, and in their wake will come newer ones that we have yet to imagine.

In 1923, the English biologist J. B. S. Haldane gave a lecture before the Heretics Society in Cambridge on how science would shape humanity in the future. His view was optimistic, if warily so. He surveyed advances in physics that were likely “to render life more and more complex, artificial, and rich in possibilities.” He suggested that chemists would soon discover psychoactive compounds that would “add to the amenity of life and promote the expression of man’s higher faculties.” But it was the biological sciences, he predicted, that would bring the greatest changes. Progress in understanding the functioning of the body, the working of the brain, and the mechanics of heredity was setting the stage for “man’s gradual conquest” of his own physical and mental being. “We can already alter animal species to an enormous extent,” he observed, “and it seems only a question of time before we shall be able to apply the same principles to our own.”

Society would, Haldane felt sure, defer to the scientist and the technologist in defining the boundaries of the human species. “The scientific worker of the future,” he concluded, “will more and more resemble the lonely figure of Daedalus as he becomes conscious of his ghastly mission, and proud of it.”


IV. A Truer You

LeBron James is an illustrated man. The NBA star has covered his body with some forty tattoos, each selected to symbolize an aspect of his life or beliefs. The phrase “CHOSEN 1” is inscribed in a heavy gothic script across his upper back. On his chest, spanning his pectorals, is an image of a manticore, the legendary winged lion with the face of a man. Across his biceps runs the motto “What we do in life echoes in eternity,” words spoken by Maximus, the Russell Crowe character, in the movie Gladiator. It wasn’t long ago that tattoos were considered distasteful or even grotesque, the marks of drunken sailors, carnival geeks, and convicts. Now they’re everywhere. Americans spend well over a billion dollars a year at tattoo parlors, more than a third of young adults sport at least one tattoo, and celebrities like James take pride in branding themselves with elaborate and evocative ink. The taboo has gone mainstream.

That’s the usual trajectory for body modifications. First we recoil from them, then we get used to them, then we embrace them. In his talk, Haldane acknowledged that society will initially resist any new attempt to refashion human beings:

There is no great invention, from fire to flying, which has not been hailed as an insult to some god. But if every physical and chemical invention is a blasphemy, every biological invention is a perversion. There is hardly one which, on first being brought to the notice of an observer from any nation which has not previously heard of their existence, would not appear to him as indecent and unnatural.

In time, attitudes change. Custom cures queasiness. What was seen as a perversion comes to be viewed as a remedy or a refinement, decent and even natural. The shapeshifter, once a pariah, becomes a pioneer, a hero.

Society’s changing attitude toward tattoos is a fairly trivial example of such cultural adaptation. More telling is the way views of sex-reassignment procedures, the most drastic of commonly performed body modifications, have evolved. Voluntary sex-change operations date back at least to the ancient world, when they amounted to little more than clumsy castrations, but it wasn’t until the 1930s that advances in surgical techniques and hormone therapies made sex reassignment technologically viable. Through the middle years of the twentieth century, when sex-change procedures remained rare, medically controversial, and fraught with legal obstacles, Americans tended to view transsexuals as at best freaks and at worst criminals. But the stigma dissipated through the latter years of the century, as sex-reassignment therapies became more sophisticated and routine and societal perceptions and norms changed. Today, although transsexuals still face prejudice, the public is coming to view sex-change procedures, whether surgical or chemical, not as treatments for unfortunate medical disorders but as ways to bring one’s body into alignment with one’s true identity. When Olympian Bruce Jenner came out as Caitlyn Jenner in the spring of 2015, she was feted by the media and praised by the president.

The history of transsexuality, writes Yale historian Joanne Meyerowitz in How Sex Changed, illuminates our times. Not only does it demonstrate “the growing authority of science and medicine,” but it also “illustrates the rise of a new concept of the modern self that placed a heightened value on self-expression, self-improvement, and self-transformation.” The perception of gender as a matter of inclination rather than biology, as a spectrum of possibilities rather than an innate binary divide, remains culturally and scientifically contentious. But its growing acceptance, particularly among the young, reveals how eager we are, whenever science grants us new powers over our bodies’ appearance and workings, to redefine human nature as malleable, as a socially and personally defined construct rather than an expression of biological imperatives. Advances in biotechnology may be unsettling, but in the end we welcome them because they give us greater autonomy in remaking ourselves into what we think we should be.


V. The Transhuman Condition

Transhumanism is “an extension of humanism,” argues Nick Bostrom, an Oxford philosophy professor who has been one of the foremost proponents of radical human enhancement. “Just as we use rational means to improve the human condition and the external world, we can also use such means to improve ourselves, the human organism.” Human nature in its current state, he says, is just the “half-baked beginning” of a “work-in-progress,” which we can now begin to “remold in desirable ways.” In pursuing this project, “we are not limited to traditional humanistic methods, such as education and cultural development. We can also use technological means that will eventually enable us to move beyond what some would think of as ‘human.’” The ultimate benefit of transhumanism, in Bostrom’s view, is that it expands “human potential,” giving individuals greater freedom “to shape themselves and their lives according to their informed wishes.” Transhumanism unchains us from our nature.

Other transhumanists take a subtly different tack in portraying their beliefs as part of the humanistic tradition. They suggest that the greatest benefit of radical enhancement is not that it allows us to transcend our deepest nature but rather to fulfill it. “Self-reconstruction” is “a distinctively human activity, something that helps define us,” writes Duke University bioethicist Allen Buchanan in his book Better Than Human. “We repeatedly alter our environment to suit our needs and preferences. In doing this we inevitably alter ourselves as well. The new environments we create alter our social practices, our cultures, our biology, and even our identity.” The only difference now, he says, “is that for the first time we can deliberately, and in a scientifically informed way, change our selves.” We can extend the Enlightenment into our cells.

Critics of radical human enhancement, often referred to as bioconservatives, take the opposite view, arguing that transhumanism is antithetical to humanism. Altering human nature in a fundamental way, they contend, is more likely to demean or even destroy the human race than elevate it. Some of their counterarguments are pragmatic. By tinkering with life, they warn, researchers risk opening a Pandora’s box, inadvertently unleashing a biological or environmental catastrophe. They also caution that access to expensive enhancement procedures and technologies is likely to be restricted to economic or political elites. Society may end up riven into two classes, with the merely normal masses under the bionic thumbs of an oligarchy of supermen. They worry, too, that as people gain prodigious intellectual and physical abilities, they’ll lose interest in the very activities that bring pleasure and satisfaction to their lives. They’ll suffer “self-alienation,” as New Zealand philosopher Nicholas Agar puts it.

But at the heart of the case against transhumanism lies a romantic belief in the dignity of life as it has been given to us. There is an essence to humankind, bioconservatives believe, from which spring both our strengths and our defects. Whether bestowed by divine design or evolutionary drift, the human essence should be cherished and protected as a singular gift, they argue. “There is something appealing, even intoxicating, about a vision of human freedom unfettered by the given,” writes Harvard professor Michael J. Sandel in The Case against Perfection. “But that vision of freedom is flawed. It threatens to banish our appreciation of life as a gift, and to leave us with nothing to affirm or behold outside our own will.” As a counter to what they see as misguided utilitarian utopianism, bioconservatives counsel humility.


The transhumanists and the bioconservatives are wrestling with the largest of questions: Who are we? What is our destiny? But their debate is a sideshow. Intellectual wrangling over the meaning of humanism and the fate of humanity is not going to have much influence over how people respond when offered new opportunities for self-expression, self-improvement, and self-transformation. The public is not going to approach transhumanism as a grand moral or political movement, a turning point in the history of the species, but rather as a set of distinct products and services, each offering its own possibilities. Whatever people sense is missing in themselves or their lives they will seek to acquire with whatever means available. And as standards of beauty, intelligence, talent, and status change, even those wary of human enhancement will find it hard to resist the general trend. Wherever they may lead us, our attempts to change human nature will be governed by human nature.

We are myth makers as well as tool makers. Biotechnology allows us to merge these two instincts, giving us the power to refashion the bodies we have and the lives we lead to more closely match those we imagine for ourselves. Transhumanism ends in a paradox. The rigorously logical work that scientists, doctors, engineers, and programmers are doing to enhance and extend our bodies and minds is unlikely to raise us onto a more rational plane. It promises, instead, to return us to a more mythical existence, as we deploy our new tools in an effort to bring our dream selves more fully into the world. LeBron James’s body art, with its richly idiosyncratic melding of Christian and pagan iconography, all filtered through a pop-culture sensibility, feels like an augury.

“I want to fly!” cries Icarus in the labyrinth. “And so you shall,” says Daedalus, his father, the inventor. It’s an old story, but we’re still in it, playing our parts.


VI. Another Tangent

Just before twilight on a Saturday evening in the spring of 2015, the storied rock climber Dean Potter walked with his girlfriend, Jen Rapp, and his buddy, Graham Hunt, from a parking area along Glacier Point Road to the edge of Taft Point, some three thousand feet above the Merced River in Yosemite Valley. Potter and Hunt were planning a BASE jump. They would leap from a ledge near the point and glide in their wingsuits for a quarter mile over the valley before passing through a notch in a ridgeline near a rock outcropping called Lost Brother. They would then unfurl their parachutes and come in for a landing in a clearing on the valley floor. Rapp would serve as spotter and photographer.

BASE jumping, one of the more extreme of extreme sports, is banned in national parks. But Hunt and Potter were dedicated daredevils who didn’t put much stock in rules. They had been jumping for years from cliffs and peaks throughout Yosemite, including the iconic Half Dome, and they had wingsuited from Taft Point several times, together and separately. The course they set for themselves that evening was dangerous—the notch was narrow, the winds contrary—but they were confident in their skills and their equipment. Potter held the mark for the longest wingsuit flight on record, having covered nearly five miles in a 2011 jump from the Eiger in Switzerland, and he had been featured in a National Geographic documentary called The Man Who Can Fly. Hunt, too, was considered one of the world’s top jumpers.


The first wingsuiter, if you don’t count Abbas ibn Firnas, was Franz Reichelt. An Austrian-born tailor who ran a dressmaking shop in Paris, he designed and stitched his own “parachute suit,” as he called the winged garment. He tested it by jumping off the Eiffel Tower on February 4, 1912. The suit failed, and the fall killed him. More than eighty years passed before a Finnish company called BirdMan International began manufacturing reliable wingsuits and selling them to skydivers and BASE jumpers. Constructed of lightweight, densely woven nylon, modern wingsuits sheath the jumper’s entire body, forming two wings between the arms and torso and another between the legs. By greatly expanding the surface area of the human frame, the suits create enough lift to allow a person to glide downward for several minutes while controlling trajectory through slight movements of the shoulders, hips, and knees. Wingsuiters frequently reach speeds of a hundred miles an hour or more, giving them an exhilarating sense that they’re actually flying.

Potter and Hunt reached the launching spot near Taft Point around seven o’clock and zipped themselves into their wingsuits. Potter jumped first, followed quickly by Hunt, while Rapp shot pictures from a few yards away. The two jumpers dropped like stones for a couple of seconds before their suits filled with air. Then, their bodies buoyant, they soared across the mountain sky with wings outstretched, like a pair of giant, brightly colored birds. “Part of me says it’s kind of crazy to think you can fly your human body,” Potter had told a New York Times reporter a few years earlier. “Another part of me thinks all of us have had the dream that we can fly. Why not chase after it? Maybe it brings you to some other tangent.”

Jen Rapp kept taking photographs until Potter and Hunt passed through the notch and out of sight. She thought she heard something, a couple of thumps, but she told herself it was probably just the chutes opening. She waited for the text message that would let her know the pair had landed safely. Nothing came. Her phone was silent. Park rangers recovered the bodies the next morning.

______________________
This essay is excerpted from my latest book, Utopia Is Creepy.

Images (from top): “The Flight of Icarus” by Jacob Peter Gowy (after a sketch by Rubens); still from the movie Birdman; photograph “Paris: Arc de Triomphe de l’Étoile – La Marseillaise” by Wally Gobetz; Assyrian relief of human-headed winged bull, from the Louvre; Hawkgirl (DC Comics); still from the movie Brewster McCloud.