
Twilight of the idylls


The Silicon Valley guys have a new hobby: driving fast cars around private tracks. They love it. “When you’re really in the zone in a racecar, it’s almost meditative,” Google executive Jeff Huber tells the Times’s Farhad Manjoo. Adds Yahoo senior vice president Jeff Bonforte, “Your brain is so happy that it washes over you.” The Valley guys are a little nervous about the optics of their pastime — “Try to tone down the rich guy hobby thing,” angel investor and ex-Googler Joshua Schachter instructs Manjoo — but the “visceral thrill” of driving has nevertheless made it “the Valley’s ‘it’ hobby.”

The Valley guys are rushing to rent out racetracks and strap themselves into Ferraris at the very moment that they’re telling the rest of us how miserable driving is, and how liberated we’ll all feel when robots take the wheel. Jazzed by a Googler’s TED talk on driverless cars, MIT automation expert Andrew McAfee says that the Googlemobile will “free us from a largely tedious task.” Writes Wired transport reporter Alex Davies, “Liberated from the need to keep our hands on the wheel and eyes on the road, drivers will become riders with more time for working, leisure, and staying in touch with loved ones.” When Astro Teller, head of Google X, watches people drive by in their cars, all he hears is a giant sucking sound, as potentially productive minutes pour down the drain of a vast time sink. “There’s over a trillion dollars of wasted time per year we could collectively get back if we didn’t have to pay attention while the car took us from one place to another,” he said in a South by Southwest keynote this month.

Driving on a private track may be pleasantly meditative, even joy-inducing, but driving on public thoroughfares is just a drag.

What’s curious here is that the descriptions of everyday driving offered with such confidence by the avatars of driverlessness are at odds with what we know about people’s actual attitudes toward and experience of driving. People like to drive. Surveys and other research consistently show that most of us enjoy being behind the wheel. We find driving relaxing and fun and even, yes, liberating — a respite from the demands of our workaday lives. Seeing driving as a “problem” because it prevents us from being productive gets the story backwards. What’s freeing about driving is the very fact that it gives us a break from the pressure to be productive.

That doesn’t mean we’re blind to automotive miseries. When researchers talk to people about driving, they hear plenty of complaints about traffic jams and grinding commutes and bad roads and parking hassles and all the rest. Our attitudes toward driving are complex, always have been, but on balance we like to have our hands on the wheel and our eyes on the road, not to mention our foot on the gas. About 70 percent of Americans say they “like to drive,” while only about 30 percent consider it “a chore,” according to a 2006 Pew survey. A survey of millennials, released earlier this year by MTV, found that, contrary to common wisdom, most young people enjoy cars and driving, too. Seventy percent of Americans between the ages of 18 and 34 say they like to drive, and 72 percent of them say they’d rather give up texting for a week than give up their car for the same period. The percentage of people who like to drive has fallen a bit in recent years as traffic has worsened — 80 percent said they liked to drive in a 1991 Pew survey — but it’s still very high, and it belies the dreary picture of driving painted by Silicon Valley. You don’t have to be wealthy enough to buy a Porsche or to rent out a racetrack to enjoy the meditative and active pleasures of driving. They can be felt on the open road as well as the closed track.

In suggesting that driving is no more than a boring, productivity-sapping waste of time, the Valley guys are mistaking a personal bias for a universal truth. And they’re blinding themselves to the social and cultural challenges they’re going to face as they try to convince people to be passengers rather than drivers. Even if all the technical hurdles to achieving perfect vehicular automation are overcome — and despite rosy predictions, that remains a sizable if — the developers and promoters of autonomous cars are going to discover that the psychology of driving is far more complicated than they assume and far different from the psychology of being a passenger. Back in the 1970s, the public rebelled, en masse, when the federal government, for seemingly solid safety and fuel-economy reasons, imposed a national 55-mile-per-hour speed limit. The limit was repealed. If you think that everyone’s going to happily hand the steering wheel over to a robot, you’re probably delusional.

There’s something bigger going on here, and I confess that I’m still a little fuzzy about it. Silicon Valley seems to have a good deal of trouble appreciating, or even understanding, what I’ll term informal experience. It’s only when driving is formalized — removed from everyday life, transferred to a specialized facility, performed under a strict set of rules, and understood as a self-contained recreational event — that it can be conceived of as being pleasurable. When it’s not a recreational routine, when it’s performed out in the world, as part of everyday life, then driving, in the Valley view, can only be understood within the context of another formalized realm of experience: that of productive busyness. Every experience has to be cleanly defined, has to be categorized. There’s a place and a time for recreation, and there’s a place and a time for productivity.

This discomfort with the informal, with experience that is psychologically unbounded, that flits between and beyond categories, can be felt in a lot of the Valley’s consumer goods and services. Many personal apps and gadgets have the effect, or at least the intended effect, of formalizing informal activities. Once you strap on a Fitbit, you transform what might have been a pleasant walk in the park into a program of physical therapy. A passing observation that once might have earned a few fleeting smiles or shrugs before disappearing into the ether is now, thanks to the distribution systems of Facebook and Twitter, encapsulated as a product and subjected to formal measurement; every remark gets its own Nielsen rating.

What’s the source of this crabbed view of experience? I’m not sure. It may be an expression of a certain personality type. It may be a sign of the market’s continuing colonization of the quotidian. I’d guess it also has something to do with the rigorously formal qualities of programming itself. The universality of the digital computer ends — comes to a crashing halt, in fact — where informality begins.

Image: Burt and Sally mix their pleasures in “Smokey and the Bandit.”


History and economics, simplified

Cub economist Marc Andreessen has been thinking again:

[Image of Andreessen’s argument]

I understand that data doesn’t explain everything, but in this particular case I would really, really like to see the data that Marc has assembled to back up his argument.


A litmus test for technology critics


Evgeny Morozov has written, in The Baffler, a critique of technology criticism in the guise of a review of my book The Glass Cage. He makes many important points, as he always does — Morozov’s intellect is admirably fierce and hungry — but his conclusions about the nature and value of technology criticism are wrong-headed, their implications pernicious. Morozov wants to narrow the sights of such criticism, to declare as invalid any critical approach that isn’t consistent with his own. He wants to establish an ideological litmus test for technology critics. As someone who approaches the question of technology, of tool-making and tool use, from a very different angle than the one Morozov takes, I’d like to respond with a word or two in defense of an open and pluralistic approach to technology criticism, rather than the closed and doctrinaire approach that Morozov advocates.

First, let me deal quickly with the personal barbs that are one of the hallmarks of Morozov’s brand of criticism. At one point in his review, he writes, “Carr doesn’t try very hard to engage his opponents.” This odd remark — there are plenty of people who disagree with me, but I hardly see them as “opponents” — says much about Morozov’s psychology as a critic. He is always trying hard — very, very hard — to “engage his opponents.” You sense, in reading his work, that there is a hot, sweaty wrestling match forever playing out in his mind, and that he can’t stop glancing up at the scoreboard to see where things stand. No opportunity to score a point goes unexploited. It’s this wrestling-match mentality that explains Morozov’s tactic of willfully distorting the ideas of other writers — his “opponents” — in order to make it easier for him to add to his score. The writer Steven Johnson has summed up Morozov’s modus operandi with precision: “He’s like a vampire slayer that has to keep planting capes and plastic fangs on his victims to stay in business.” With Morozov, a fierce intellect and a childish combativeness would seem to be two sides of the same personality, so it’s probably best to ignore the latter and concentrate on the former.

Morozov is disappointed that a “radical” political viewpoint is not more prominent in discussions of the role of technology in American culture. Technology criticism, at least in its popular form, concerns itself mainly, he says, with “what an ethos of permanent disruption means for the configuration of the liberal self or the survival of its landmark institutions, from universities to newspapers.” Radical political approaches to technology criticism, particularly those that place “technology, media, and communications within Marxist analytical frameworks,” go largely unnoticed. That certainly seems an accurate assessment. I think the same could be said about any popular discussion about any aspect of American culture. Marxist analytical frameworks, though prized in academia, cut no ice in the mainstream. Maybe that reflects a flaw in American culture. Maybe it reflects a flaw in the frameworks. Maybe it reflects a bit of both. It is in any case one symptom of the general narrowing of our public debates. In an earlier critique of technology criticism, published in Democracy, Henry Farrell argued that our intellectual life has in general been constrained by a highly competitive “attention economy” that pushes popular debates into a safe middle ground. To be radical, whether from the left or the right, is to be peripheral. The public intellectual has turned into an intellectual entrepreneur, selling a basket of bland ideas to a market of easily distracted consumers.

It’s easy, then, to agree with Morozov that there’s something lacking in contemporary American technology criticism. A broader discussion of technology, one that makes room for and indeed welcomes strongly political points of view, including all manner of radical ones that question the status quo, would enrich the conversation and perhaps give it a greater practical force. Except that that’s not really what Morozov wants. Morozov has come to believe that the only valid technology criticism is political criticism. In fact, he believes that the only valid technology criticism is political criticism that shares his own particular ideology. “Today,” he writes at a crucial juncture in his review, “it’s obvious to me that technology criticism, uncoupled from any radical project of social transformation, simply doesn’t have the goods.” The only critics fit to answer the hard questions about technology are those “who haven’t yet lost the ability to think in non-market and non-statist terms.” Only those with a “progressive agenda,” with an “emancipatory political vision,” can pass the litmus test; everyone else is ideologically suspect. Morozov, always eager to point out any definitional fuzziness in other people’s vocabularies, doesn’t bother to define precisely what he means by radicalism or progressivism or social transformation or an emancipatory political vision. One assumes, though, that he will be the judge of what is legitimate and what is not. Anyone who doesn’t toe the Morozov line will be branded either a “romantic” or a “conservative” and hence deemed unfit to have a voice in the conversation. “Carr is extremely murky on his own [politics],” Morozov declares at one point, casting aspersion in the form of suspicion.

What particularly galls Morozov is any phenomenological critique of technology, any critical approach that begins by examining the way that the tools people use shape their actual experience of life — their behavior, their perceptions, their thoughts, their relations with others and with the world. The entire tradition of such criticism, a rich and vital tradition that I’m proud to be a part of, is anathema to him, a mere distraction from the political. Just as Morozov ignores my discussion of the politics of progress in The Glass Cage (to acknowledge it might complicate his argument), he blinds himself to the political implications of the work of the phenomenological philosophers. To explore the personal effects of tool use is, he contends, just an elitist waste of time, a frivolous bourgeois pursuit that he reduces to “poring over the texts of some ponderous French or German philosopher.” What we don’t understand we hold in contempt. Because he has no interest in the individual except as an elemental political particle, not a being but an abstraction, Morozov concludes that technology criticism is a zero-sum game. Any time spent on the personal is time not spent on the political. And so, in place of pluralistic inquiry, we get dogmatism. An argument that might have been expansive and welcoming becomes a sneering and self-aggrandizing exercise in exclusion.

“Goodbye to all that,” Morozov writes, grandly dismissing all of his own earlier work that does not fit neatly into his new analytical frameworks. Apparently, he has recently experienced a political awakening. It’s a shame, though not a surprise, that it has come at the cost of an open mind.

Photo: Juan Ramon Martos.


The mosquito


Every time I convince myself that I’ve disabled all sources of automated notifications on all devices, something slips through. The latest was from Twitter, and it took this form:

@soandso retweeted one of your Retweets!

I deleted it with the same alacrity I show in swatting a mosquito about to plunge its proboscis into the capillaries of my forearm. The notification was there, and then it wasn’t there — just the after-image twinkling through the neurons of my visual cortex, hardening into memory.

Although I appreciate Twitter’s fussy approach to capitalization (if not the girl-scout eagerness of its terminal punctuation), the banality of the message strikes me as fundamental:

@soandso retweeted one of your Retweets!

One feels, in reading this, or glancing hungrily across its narrow expanse, as one does with notifications, that one has come to the bedrock of social media. And, more or less by definition, when one comes to the bedrock of social media, one also comes to the bedrock of vanity. Here at last one knows exactly where one stands.

How many times in the course of a day, I wondered, does a notification in this or some similar retweet-of-a-retweet form elbow its way — such tiny elbows! — onto the screen of a device? Is the number in the billions? It must be in the billions. If I close my eyes, I can actually feel the weight of them all. It feels soft, like a down pillow.

@soandso retweeted one of your Retweets!

“Man hands on misery to man,” Philip Larkin wrote. “It deepens like a coastal shelf.” I would be overreaching if I were to suggest that this trifling message, this near-nothingness of intraspecies communication, this wisp of flattery, is a unit of misery. But the melancholy image of the deepening coastal shelf seems apt: all those grains of sand swirling downward through the water and gently coming to rest on the sea floor.

Have I mixed my geological metaphors? So be it. The world shapes itself to our thoughts.

Our intentions? That’s a different matter.

You think you were quick enough. You think you swatted the mosquito before it pierced your skin. And yet now you see the small, pink welt rising on your forearm, and you know that in a matter of seconds you will be scratching it and that there will be pleasure in that act. For vanity is the strongest of forces.

Thank you, @soandso. Thank you for retweeting one of my Retweets. Thank you for your part in it. Thank you for adding a little something to the whole that is never whole.

Photo: Jo Naylor.


A symbiosis of surveillance


“Morals reformed—health preserved—industry invigorated—instruction diffused—public burthens lightened—Economy seated, as it were, upon a rock—all by a simple idea in Architecture!” —Jeremy Bentham

“Could this spell the end for speeding tickets?” asks Ford Motor Company’s UK arm as it introduces Intelligent Speed Limiter, an automotive system that prevents drivers from speeding:

The system monitors road signs with a camera mounted on the windscreen, and slows the vehicle as required. As the speed limit rises, the system allows the driver to accelerate up to the set speed — providing it does not exceed the new limit.

“Drivers are not always conscious of speeding and sometimes only becoming aware they were going too fast [sic] when they receive a fine in the mail or are pulled over by law enforcement,” said Stefan Kappes, active safety supervisor, Ford of Europe. “Intelligent Speed Limiter can remove one of the stresses of driving, helping ensure customers remain within the legal speed limit.”

The Register’s Simon Rockman fills in the technical details:

The Intelligent Speed Limiter combines current Ford technologies: the Adjustable Speed Limiter and Traffic Sign Recognition … At speeds of between 20mph and 120mph the system smoothly decelerates by restricting the fuel supplied to the engine, rather than applying the brakes. Should travelling downhill cause the vehicle to exceed the legislated speed an alarm is sounded. The limiter also communicates with the on-board navigation system to help accurately maintain the appropriate maximum speed when distances between speed limit signs are greater, for example on long country roads.

Britain has, of course, been a leader in the automated enforcement of traffic laws, having installed radar-equipped cameras pretty much everywhere. Intelligent Speed Limiter closes the loop between enforcing the law and obeying the law. One camera keeps tabs on you; another makes sure you stick to the straight and narrow. And that means you can relax, like a baby in a Snugli.

As Rockman explains, “the Ford tech is fighting automatic regulation with automatic adherence.” But “fighting” doesn’t seem like quite the right verb. It’s more of a warm, seamless, symbiotic embrace between surveillance and response, with the stress-inducing vagaries of personal choice removed from the equation. One can imagine all sorts of applications of such closed-loop enforcement systems as the internet of things becomes universal.
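
Ford hasn’t published its control logic, but the loop the Register describes is simple enough to sketch. Here is a minimal illustration in Python; the class, the method names, and the thresholds are my own assumptions, not Ford’s code.

```python
# A minimal sketch of the closed loop the Register describes: a sign-reading
# camera sets the current ceiling, and the throttle (not the brakes) is cut
# to hold the car under it. All names and numbers are illustrative, not Ford's.

class SpeedLimiterSketch:
    MIN_ACTIVE_MPH = 20    # the system reportedly operates between 20 and 120 mph
    MAX_ACTIVE_MPH = 120

    def __init__(self):
        self.posted_limit_mph = None   # most recent limit read from a sign or map data

    def on_sign_detected(self, limit_mph):
        """Camera or navigation data reports a new posted speed limit."""
        self.posted_limit_mph = limit_mph

    def throttle_command(self, speed_mph, driver_throttle):
        """Return (throttle_to_apply, sound_alarm) for this control cycle."""
        if self.posted_limit_mph is None:
            return driver_throttle, False
        if not (self.MIN_ACTIVE_MPH <= speed_mph <= self.MAX_ACTIVE_MPH):
            return driver_throttle, False
        if speed_mph < self.posted_limit_mph:
            return driver_throttle, False      # under the limit: the driver is in charge
        # At or over the limit: restrict fuel/throttle rather than brake. If the car
        # is still over the limit (say, rolling downhill), sound the warning chime.
        return 0.0, speed_mph > self.posted_limit_mph
```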

Photo: Jon Lewis.


First, kill all the artisans


“The built environment is an $8 trillion per year industry that is still basically artisanal.” So said Astro Teller, head of the Google X research lab, during a speech at South by Southwest last week. Reading that sentence in isolation, you might assume that Teller intended it as praise, that he was applauding the field of architecture for maintaining its heritage of craftsmanship, skill, and artistry. But you would be wrong. Being “still basically artisanal” is, for Teller, a great flaw. It’s a symptom of both a debilitating lack of software-mediated routinization and a tragic superfluity of quirky human talent. Artisanality is a problem that Google is seeking to solve. One Google X project, Teller explained, is intended “to fix the way buildings are designed and built by building, basically, an expert system, a software Genie if you will, that could take your needs for the building and design the building for you.” By getting all those messy and outmoded artisans out of the picture, replacing them with tidy software algorithms, we’ll be able to avoid the inefficiency and waste that inevitably accompany human effort.

But, Teller went on to say, the Genie project has run into a problem: “We found out that the system we envisioned couldn’t connect into the infrastructure and ecosystems for building the built environment because that software infrastructure is piecemeal and often not software at all but just knowledge trapped in the heads of the experts in the field.” Let me repeat that last bit: “not software at all but just knowledge trapped in the heads of the experts in the field.” Quelle horreur! The goal now, he said, is to take “a huge step back” and lay “a software foundation and data layer” that will allow Google to liberate all that head-imprisoned knowledge and eradicate the pestilence of artistry once and for all.

Image: Mark Moz.


Our algorithms, ourselves


An earlier version of this essay appeared last year, under the headline “The Manipulators,” in the Los Angeles Review of Books.

Since the launch of Netscape and Yahoo twenty years ago, the story of the internet has been one of new companies and new products, a story shaped largely by the interests of entrepreneurs and venture capitalists. The plot has been linear; the pace, relentless. In 1995 came Amazon and Craigslist; in 1997, Google and Netflix; in 1999, Napster and Blogger; in 2001, iTunes; in 2003, MySpace; in 2004, Facebook; in 2005, YouTube; in 2006, Twitter; in 2007, the iPhone and the Kindle; in 2008, Airbnb; in 2010, Instagram and Uber; in 2011, Snapchat; in 2012, Coursera; in 2013, Tinder. It has been a carnival ride, and we, the public, have been the giddy passengers.

The story may be changing now. Though the current remains swift, eddies are appearing in the stream. Last year, the big news about the net came not in the form of buzzy startups or cool gadgets, but in the shape of two dry, arcane documents. One was a scientific paper describing an experiment in which researchers attempted to alter the moods of Facebook users by secretly manipulating the messages they saw. The other was a ruling by the European Union’s highest court granting citizens the right to have outdated or inaccurate information about them erased from Google and other search engines. Both documents provoked consternation, anger, and argument. Both raised important, complicated issues without resolving them. Arriving in the wake of Edward Snowden’s revelations about the NSA’s online spying operation, both seemed to herald, in very different ways, a new stage in the net’s history — one in which the public will be called upon to guide the technology, rather than the other way around. We may look back on 2014 as the year the internet began to grow up.

* * *

The Facebook study seemed fated to stir up controversy. Its title read like a bulletin from a dystopian future: Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks. But when, on June 2, 2014, the article first appeared on the website of the Proceedings of the National Academy of Sciences (PNAS), it drew little notice or comment. It sank quietly into the vast swamp of academic publishing. That changed abruptly three weeks later, on June 26, when technology reporter Aviva Rutkin posted a brief account of the study on the website of New Scientist magazine. She noted that the research had been run by a Facebook employee, a social psychologist named Adam Kramer who worked in the firm’s large Data Science unit, and that it had involved more than half a million members of the social network. Smelling a scandal, other journalists rushed to the PNAS site to give the paper a read. They discovered that Facebook had not bothered to inform its members about their participation in the experiment, much less ask their consent.

Outrage ensued, as the story pinballed through the media. “If you were still unsure how much contempt Facebook has for its users,” declared the technology news site PandoDaily, “this will make everything hideously clear.” A New York Times writer accused Facebook of treating people like “lab rats,” while The Washington Post, in an editorial, criticized the study for “cross[ing] an ethical line.” US Senator Mark Warner called on the Federal Trade Commission to investigate the matter, and at least two European governments opened probes. The response from social media was furious. “Get off Facebook,” tweeted Erin Kissane, an editor at a software site. “If you work there, quit. They’re fucking awful.” Writing on Google Plus, the privacy activist Lauren Weinstein wondered whether Facebook “KILLED anyone with their emotion manipulation stunt.”

The ethical concerns were justified. Although Facebook, as a private company, is not bound by the informed-consent guidelines of universities and government agencies, its decision to carry out psychological research on people without telling them was at best rash and at worst reprehensible. It violated the US Department of Health & Human Services’ policy for the protection of human research subjects (known as the “Common Rule”) as well as the ethics code of the American Psychological Association. Making the transgression all the more inexcusable was the company’s failure to exclude minors from the test group. The fact that the manipulation of information was carried out by the researchers’ computers rather than by the researchers themselves — a detail that Facebook offered in its defense — was beside the point. As University of Maryland law professor James Grimmelmann observed, psychological manipulation remains psychological manipulation “even when it’s carried out automatically.”

Still, the intensity of the reaction seemed incommensurate with its object. Once you got past the dubious ethics and the alarming title, the study turned out to be a meager piece of work. Earlier psychological research had suggested that moods, like sneezes, could be contagious. If you hang out with sad people, you’ll probably end up feeling a little blue yourself. Kramer and his collaborators (the paper was coauthored by two Cornell scientists) wanted to see if such emotional contagion might also be spread through online social networks. During a week in January 2012, they programmed Facebook’s News Feed algorithm — the program that selects which messages to funnel onto a member’s home page and which to omit — to make slight adjustments in the “emotional content” of the feeds delivered to a random sample of members. One group of test subjects saw a slightly higher number of “positive” messages than normal, while another group saw slightly more “negative” messages. To categorize messages as positive or negative, the researchers used a standard text-analysis program, called Linguistic Inquiry and Word Count, that spots words expressing emotions in written works. They then evaluated each subject’s subsequent Facebook posts to see whether the emotional content of the messages had been influenced by the alterations in the News Feed.
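
The mechanics are easy to caricature. Here is a toy sketch, in Python, of the study’s two steps: scoring each post’s emotional content by counting emotion words, then lightly biasing the feed shown to a test subject. The word lists, the omission probability, and the function names are stand-ins of my own, not the LIWC lexicon or Facebook’s News Feed code.

```python
import random

# Toy illustration of the experiment's design. Step 1: classify a post as
# positive, negative, or neutral by counting emotion words (LIWC does this
# with a large lexicon; these tiny sets are stand-ins). Step 2: build a feed
# that omits a small fraction of posts in one emotional category.

POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible", "lonely"}

def classify(post: str) -> str:
    words = [w.strip(".,!?").lower() for w in post.split()]
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def build_feed(candidate_posts, suppress="negative", omit_probability=0.1):
    """Return a feed in which posts of the suppressed category are dropped
    with a small probability, leaving everything else untouched."""
    return [p for p in candidate_posts
            if not (classify(p) == suppress and random.random() < omit_probability)]
```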

The researchers did discover an influence. People exposed to more negative words went on to use more negative words than would have been expected, while people exposed to more positive words used more of the same — but the effect was vanishingly small, measurable only in a tiny fraction of a percentage point. If the effect had been any more trifling, it would have been undetectable. As Kramer later explained, in a contrite Facebook post, “the actual impact on people in the experiment was the minimal amount to statistically detect it — the result was that people produced an average of one fewer emotional word, per thousand words, over the following week.” As contagions go, that’s a pretty feeble one. It seems unlikely that any participant in the study suffered the slightest bit of harm. As Kramer admitted, “the research benefits of the paper may not have justified all of this anxiety.”

* * *

What was most worrisome about the study lay not in its design or its findings, but in its ordinariness. As Facebook made clear in its official responses to the controversy, Kramer’s experiment was just the visible tip of an enormous and otherwise well-concealed iceberg. In an email to the press, a company spokesperson said the PNAS study was part of the continuing research Facebook does to understand “how people respond to different types of content, whether it’s positive or negative in tone, news from friends, or information from pages they follow.” Sheryl Sandberg, the company’s chief operating officer, reinforced that message in a press conference: “This was part of ongoing research companies do to test different products, and that was what it was.” The only problem with the study, she went on, was that it “was poorly communicated.” A former member of Facebook’s Data Science unit, Andrew Ledvina, told The Wall Street Journal that the in-house lab operates with few restrictions. “Anyone on that team could run a test,” he said. “They’re always trying to alter people’s behavior.”

Businesses have been trying to alter people’s behavior for as long as businesses have been around. Marketing departments and advertising agencies are experts at formulating, testing, and disseminating images and words that provoke emotional responses, shape attitudes, and trigger purchases. From the apple-cheeked Ivory Snow baby to the chiseled Marlboro man to the moon-eyed Cialis couple, we have for decades been bombarded by messages intended to influence our feelings. The Facebook study is part of that venerable tradition, a fact that the few intrepid folks who came forward to defend the experiment often emphasized. “We are being manipulated without our knowledge or consent all the time — by advertisers, marketers, politicians — and we all just accept that as a part of life,” argued Duncan Watts, a researcher who studies online behavior for Microsoft. “Marketing as a whole is designed to manipulate emotions,” said Nicholas Christakis, a Yale sociologist who has used Facebook data in his own research.

The “everybody does it” excuse is rarely convincing, and in this case it’s specious. Thanks to the reach of the internet, the kind of psychological and behavioral testing that Facebook does is different in both scale and kind from the market research of the past. Never before have companies been able to gather such intimate data on people’s thoughts and lives, and never before have they been able to so broadly and minutely shape the information that people see. If the Post Office had ever disclosed that it was reading everyone’s mail and choosing which letters to deliver and which not to, people would have been apoplectic, yet that is essentially what Facebook has been doing. In formulating the algorithms that run its News Feed and other media services, it molds what its billion-plus members see and then tracks their responses. It uses the resulting data to further adjust its algorithms, and the cycle of experiments begins anew. Because the algorithms are secret, people have no idea which of their buttons are being pushed — or when, or why.

Facebook is hardly unique. Pretty much every internet company performs extensive experiments on its users, trying to figure out, among other things, how to increase the time they spend using an app or a site, or how to increase the likelihood they will click on an advertisement or a link. Much of this research is innocuous. Google once tested 41 different shades of blue on a web-page toolbar to determine which color would produce the most clicks. But not all of it is innocuous. You don’t have to be paranoid to conclude that the PNAS test was far from the most manipulative of the experiments going on behind the scenes at internet companies. You only have to be sensible.
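
The mechanics of such tests are banal. Here is a rough sketch, in Python, of a multi-variant experiment along the lines of the 41-shades test: each user is deterministically assigned a shade, and click-through rates are tallied per variant. The color values and the bucketing scheme are illustrative guesses, not Google’s method.

```python
import hashlib

# Rough sketch of a 41-way variant test: bucket each user into one shade,
# record impressions and clicks, and compare click-through rates per variant.

VARIANTS = ["#0000%02x" % b for b in range(180, 221)]   # 41 shades of blue

def assign_variant(user_id: str) -> str:
    """Deterministically assign a user to one variant via a hash of their id."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

def click_through_rates(events):
    """events: iterable of (user_id, clicked) pairs, clicked being 0 or 1."""
    shown, clicked = {}, {}
    for user_id, did_click in events:
        variant = assign_variant(user_id)
        shown[variant] = shown.get(variant, 0) + 1
        clicked[variant] = clicked.get(variant, 0) + did_click
    return {v: clicked[v] / shown[v] for v in shown}
```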

That became clear, in the midst of the Facebook controversy, when another popular web operation, the matchmaking site OKCupid, disclosed that it routinely conducts psychological research in which it doctors the information it provides to its love-seeking clientele. It has, for instance, done experiments in which it altered people’s profile pictures and descriptions. It has even circulated false “compatibility ratings” to see what happens when ill-matched strangers believe they’ll be well-matched couples. OKCupid was not exactly contrite about abusing its customers’ trust. “Guess what, everybody,” blogged the company’s cofounder, Christian Rudder: “if you use the internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.”

The problem with manipulation is that it hems us in. It weakens our volition and circumscribes our will, substituting the intentions of others for our own. When efforts to manipulate us are hidden from us, the likelihood that we’ll fall victim to them grows. Other than the dim or gullible, most people in the past understood that corporate marketing tactics, from advertisements to celebrity endorsements to package designs, were intended to be manipulative. As long as those tactics were visible, we could evaluate them and resist them — maybe even make jokes about them. That’s no longer the case, at least not when it comes to online services. When companies wield moment-by-moment control over the flow of personal correspondence and other intimate or sensitive information, tweaking it in ways that are concealed from us, we’re unable to discern, much less evaluate, the manipulative acts. We find ourselves inside a black box.

* * *

Put yourself in the shoes of Mario Costeja González. In 1998, the Spaniard ran into a little financial difficulty. He had defaulted on a debt, and to pay it off he was forced to put some real estate up for auction. The sale was duly noted in the venerable Barcelona newspaper La Vanguardia. The matter settled, Costeja González went on with his life as a graphologist, an interpreter of handwriting. The debt and the auction, as well as the 36-word press notice about them, faded from public memory. The bruise healed.

But then, in 2009, nearly a dozen years later, the episode sprang back to life. La Vanguardia put its archives online, Google’s web-crawling “bot” sniffed out the old article about the auction, the article was automatically added to the search engine’s database, and a link to it began popping into prominent view whenever someone in Spain did a search on Costeja’s name. Costeja was dismayed. It seemed unfair to have his reputation sullied by an out-of-context report on an old personal problem that had long ago been resolved. Presented without explanation in search results, the article made him look like a deadbeat. He felt, as he would later explain, that his dignity was at stake.

Costeja lodged a formal complaint with the Spanish government’s data-protection agency. He asked the regulators to order La Vanguardia to remove the article from its website and to order Google to stop linking to the notice in its search results. The agency refused to act on the request concerning the newspaper, citing the legality of the article’s original publication, but it agreed with Costeja about the unfairness of the Google listing. It told the company to remove the auction story from its results. Appalled, Google appealed the decision, arguing that in listing the story it was merely highlighting information published elsewhere. The dispute quickly made its way to the Court of Justice of the European Union in Luxembourg, where it became known as the “right to be forgotten” case. On May 13, 2014, the high court issued its decision. Siding with Costeja and the Spanish data-protection agency, the justices ruled that Google was obligated to obey the order and remove the La Vanguardia piece from its search results. The upshot: European citizens suddenly had the right to get certain unflattering information about them deleted from search engines.

Most Americans, and quite a few Europeans, were flabbergasted by the decision. They saw it not only as unworkable (how can a global search engine processing some six billion searches a day be expected to evaluate the personal grouses of individuals?), but also as a threat to the free flow of information online. Many accused the court of licensing censorship or even of creating “memory holes” in history.

But the heated reactions, however understandable, were off the mark. They reflected a misinterpretation of the decision. The court had not established a “right to be forgotten.” That essentially metaphorical phrase is mentioned only in passing in the ruling, and its attachment to the case has proven a distraction. In an open society, where freedom of thought and speech are protected, where people’s thoughts and words are their own, a right to be forgotten is as untenable as a right to be remembered. What the case was really about was an individual’s right not to be systematically misrepresented. But even putting the decision into those more modest terms is misleading. It implies that the court’s ruling was broader than it actually was.

The essential issue the justices were called upon to address was how, if at all, a 1995 European Union policy on the processing of personal data, the so-called Data Protection Directive, applied to companies that, like Google, engage in the large-scale aggregation of information online. The directive had been enacted to ease the cross-border exchange of data, while also establishing privacy and other protections for citizens. “Whereas data-processing systems are designed to serve man,” the policy reads, “they must, whatever the nationality or residence of natural persons, respect their fundamental rights and freedoms, notably the right to privacy, and contribute to economic and social progress, trade expansion and the well-being of individuals.” To shield people from abusive or unjust treatment, the directive imposed strict regulations on businesses and other organizations that act as “controllers” of the processing of personal information. It required, among other things, that any data disseminated by such controllers be not only accurate and up-to-date, but fair, relevant, and “not excessive in relation to the purposes for which they are collected and/or further processed.” What the directive left unclear was whether companies that aggregated information produced by others — companies like Google and Facebook — fell into the category of controllers. That was what the court had to decide.

Search engines, social networks, and other online aggregators have always presented themselves as playing a neutral and essentially passive role when it comes to the processing of information. They’re not creating the content they distribute — that’s done by publishers in the case of search engines, or by individual members in the case of social networks. Rather, they’re simply gathering the information and arranging it in a useful form. This view, tirelessly promoted by Google — and used by the company as a defense in the Costeja case — has been embraced by much of the public. It has become the default view. When Wikipedia cofounder Jimmy Wales, in criticizing the European court’s decision, said, “Google just helps us to find the things that are online,” he was not only mouthing the company line; he was expressing the popular conception of information aggregators.

The court took a different view. Online aggregation is not a neutral act, it ruled, but a transformative one. In collecting, organizing, and ranking information, a search engine is creating something new: a distinctive and influential product that reflects the company’s own editorial intentions and judgments, as expressed through its information-processing algorithms. “The processing of personal data carried out in the context of the activity of a search engine can be distinguished from and is additional to that carried out by publishers of websites,” the justices wrote. “Inasmuch as the activity of a search engine is therefore liable to affect significantly […] the fundamental rights to privacy and to the protection of personal data, the operator of the search engine as the person determining the purposes and means of that activity must ensure, within the framework of its responsibilities, powers and capabilities, that the activity meets the requirements of [the Data Protection Directive] in order that the guarantees laid down by the directive may have full effect.”

The European court did not pass judgment on the guarantees established by the Data Protection Directive, nor on any other existing or prospective laws or policies pertaining to the processing of personal information. It did not tell society how to assess or regulate the activities of aggregators like Google or Facebook. It did not even offer an opinion as to the process companies or lawmakers should use in deciding which personal information warranted exclusion from search results — an undertaking every bit as thorny as it’s been made out to be. What the justices did, with perspicuity and prudence, was provide us with a way to think rationally about the algorithmic manipulation of digital information and the social responsibilities it entails. The interests of a powerful international company like Google, a company that provides an indispensable service to many people, do not automatically trump the interests of a lone individual. When it comes to the operation of search engines and other information aggregators, fairness is at least as important as expedience.

Ten months have passed since the court’s ruling, and we now know that the judgment is not going to “break the internet,” as was widely predicted when it was issued. The web still works. Google has a process in place for adjudicating requests for the removal of personal information — it accepts about forty percent of them — just as it has a process in place for adjudicating requests to remove copyrighted information. Last month, Google’s Advisory Council on the Right to Be Forgotten issued a report that put the ruling and the company’s response into context. “In fact,” the council wrote, “the Ruling does not establish a general Right to Be Forgotten. Implementation of the Ruling does not have the effect of ‘forgetting’ information about a data subject. Instead, it requires Google to remove links returned in search results based on an individual’s name when those results are ‘inadequate, irrelevant or no longer relevant, or excessive.’ Google is not required to remove those results if there is an overriding public interest in them ‘for particular reasons, such as the role played by the data subject in public life.'” It is possible, in other words, to strike a reasonable balance between an individual’s interests, the interests of the public in finding information quickly, and the commercial interests of internet companies.

* * *

We have had a hard time thinking clearly about companies like Google and Facebook because we have never before had to deal with companies like Google and Facebook. They are something new in the world, and they don’t fit neatly into our existing legal and cultural templates. Because they operate at such unimaginable magnitude, carrying out millions of informational transactions every second, we’ve tended to think of them as vast, faceless, dispassionate computers — as information-processing machines that exist outside the realm of human intention and control. That’s a misperception, and a dangerous one.

Modern computers and computer networks enable human judgment to be automated, to be exercised on a vast scale and at a breathtaking pace. But it’s still human judgment. Algorithms are constructed by people, and they reflect the interests, biases, and flaws of their makers. As Google’s founders themselves pointed out many years ago, an information aggregator operated for commercial gain will inevitably be compromised and should always be treated with suspicion. That is certainly true of a search engine that mediates our intellectual explorations; it is even more true of a social network that mediates our personal associations and conversations.

Because algorithms impose on us the interests and biases of others, we have a right and an obligation to carefully examine and, when appropriate, judiciously regulate those algorithms. We have a right and an obligation to understand how we, and our information, are being manipulated. To ignore that responsibility, or to shirk it because it raises hard problems, is to grant a small group of people — the kind of people who carried out the Facebook and OKCupid experiments — the power to play with us at their whim.

Image: Emily Hummel.
