A symbiosis of surveillance


“Morals reformed—health preserved—industry invigorated—instruction diffused—public burthens lightened—Economy seated, as it were, upon a rock—all by a simple idea in Architecture!” —Jeremy Bentham

“Could this spell the end for speeding tickets?” asks Ford Motor Company’s UK arm as it introduces Intelligent Speed Limiter, an automotive system that prevents drivers from speeding:

The system monitors road signs with a camera mounted on the windscreen, and slows the vehicle as required. As the speed limit rises, the system allows the driver to accelerate up to the set speed — providing it does not exceed the new limit.

“Drivers are not always conscious of speeding and sometimes only becoming aware they were going too fast [sic] when they receive a fine in the mail or are pulled over by law enforcement,” said Stefan Kappes, active safety supervisor, Ford of Europe. “Intelligent Speed Limiter can remove one of the stresses of driving, helping ensure customers remain within the legal speed limit.”

The Register’s Simon Rockman fills in the technical details:

The Intelligent Speed Limiter combines current Ford technologies: the Adjustable Speed Limiter and Traffic Sign Recognition … At speeds of between 20mph and 120mph the system smoothly decelerates by restricting the fuel supplied to the engine, rather than applying the brakes. Should travelling downhill cause the vehicle to exceed the legislated speed an alarm is sounded. The limiter also communicates with the on-board navigation system to help accurately maintain the appropriate maximum speed when distances between speed limit signs are greater, for example on long country roads.
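The control logic Rockman describes — cut fuel rather than brake, and warn the driver if gravity keeps the car over the limit — can be sketched as a simple per-cycle decision. This is a hypothetical simplification, not Ford's implementation; in this one-step sketch the alarm fires whenever the car exceeds the detected limit, whereas the real system sounds it only when restricting fuel alone cannot slow the vehicle:

```python
def limiter_step(speed_mph, limit_mph, pedal):
    """One control cycle of a speed limiter (hypothetical sketch).

    speed_mph: current vehicle speed
    limit_mph: speed limit read from the sign-recognition camera
    pedal:     driver's throttle request in [0, 1]
    Returns (throttle_command, alarm).
    """
    # The system operates only between 20 mph and 120 mph.
    if not (20 <= limit_mph <= 120):
        return pedal, False
    if speed_mph > limit_mph:
        # Decelerate by restricting fuel, never by applying the brakes;
        # sound the alarm (e.g. when coasting downhill over the limit).
        return 0.0, True
    # Below the limit: pass the driver's request through unchanged.
    return pedal, False
```

The key design choice the article highlights is that the limiter intervenes only on the throttle path, leaving braking entirely to the driver.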

Britain has, of course, been a leader in the automated enforcement of traffic laws, having installed radar-equipped cameras pretty much everywhere. Intelligent Speed Limiter closes the loop between enforcing the law and obeying the law. One camera keeps tabs on you; another makes sure you stick to the straight and narrow. And that means you can relax, like a baby in a Snugli.

As Rockman explains, “the Ford tech is fighting automatic regulation with automatic adherence.” But “fighting” doesn’t seem like quite the right verb. It’s more of a warm, seamless, symbiotic embrace between surveillance and response, with the stress-inducing vagaries of personal choice removed from the equation. One can imagine all sorts of applications of such closed-loop enforcement systems as the internet of things becomes universal.

Photo: Jon Lewis.

First, kill all the artisans


“The built environment is an $8 trillion per year industry that is still basically artisanal.” So said Astro Teller, head of the Google X research lab, during a speech at South by Southwest last week. Reading that sentence in isolation, you might assume that Teller intended it as praise, that he was applauding the field of architecture for maintaining its heritage of craftsmanship, skill, and artistry. But you would be wrong. Being “still basically artisanal” is, for Teller, a great flaw. It’s a symptom of both a debilitating lack of software-mediated routinization and a tragic superfluity of quirky human talent. Artisanality is a problem that Google is seeking to solve. One Google X project, Teller explained, is intended “to fix the way buildings are designed and built by building, basically, an expert system, a software Genie if you will, that could take your needs for the building and design the building for you.” By getting all those messy and outmoded artisans out of the picture, replacing them with tidy software algorithms, we’ll be able to avoid the inefficiency and waste that inevitably accompany human effort.

But, Teller went on to say, the Genie project has run into a problem: “We found out that the system we envisioned couldn’t connect into the infrastructure and ecosystems for building the built environment because that software infrastructure is piecemeal and often not software at all but just knowledge trapped in the heads of the experts in the field.” Let me repeat that last bit: “not software at all but just knowledge trapped in the heads of the experts in the field.” Quelle horreur! The goal now, he said, is to take “a huge step back” and lay “a software foundation and data layer” that will allow Google to liberate all that head-imprisoned knowledge and eradicate the pestilence of artistry once and for all.

Image: Mark Moz.

Our algorithms, ourselves


An earlier version of this essay appeared last year, under the headline “The Manipulators,” in the Los Angeles Review of Books.

Since the launch of Netscape and Yahoo twenty years ago, the story of the internet has been one of new companies and new products, a story shaped largely by the interests of entrepreneurs and venture capitalists. The plot has been linear; the pace, relentless. In 1995 came Amazon and Craigslist; in 1997, Google and Netflix; in 1999, Napster and Blogger; in 2001, iTunes; in 2003, MySpace; in 2004, Facebook; in 2005, YouTube; in 2006, Twitter; in 2007, the iPhone and the Kindle; in 2008, Airbnb; in 2010, Instagram and Uber; in 2011, Snapchat; in 2012, Coursera; in 2013, Tinder. It has been a carnival ride, and we, the public, have been the giddy passengers.

The story may be changing now. Though the current remains swift, eddies are appearing in the stream. Last year, the big news about the net came not in the form of buzzy startups or cool gadgets, but in the shape of two dry, arcane documents. One was a scientific paper describing an experiment in which researchers attempted to alter the moods of Facebook users by secretly manipulating the messages they saw. The other was a ruling by the European Union’s highest court granting citizens the right to have outdated or inaccurate information about them erased from Google and other search engines. Both documents provoked consternation, anger, and argument. Both raised important, complicated issues without resolving them. Arriving in the wake of Edward Snowden’s revelations about the NSA’s online spying operation, both seemed to herald, in very different ways, a new stage in the net’s history — one in which the public will be called upon to guide the technology, rather than the other way around. We may look back on 2014 as the year the internet began to grow up.

* * *

The Facebook study seemed fated to stir up controversy. Its title read like a bulletin from a dystopian future: “Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks.” But when, on June 2, 2014, the article first appeared on the website of the Proceedings of the National Academy of Sciences (PNAS), it drew little notice or comment. It sank quietly into the vast swamp of academic publishing. That changed abruptly three weeks later, on June 26, when technology reporter Aviva Rutkin posted a brief account of the study on the website of New Scientist magazine. She noted that the research had been run by a Facebook employee, a social psychologist named Adam Kramer who worked in the firm’s large Data Science unit, and that it had involved more than half a million members of the social network. Smelling a scandal, other journalists rushed to the PNAS site to give the paper a read. They discovered that Facebook had not bothered to inform its members about their participation in the experiment, much less ask their consent.

Outrage ensued, as the story pinballed through the media. “If you were still unsure how much contempt Facebook has for its users,” declared the technology news site PandoDaily, “this will make everything hideously clear.” A New York Times writer accused Facebook of treating people like “lab rats,” while The Washington Post, in an editorial, criticized the study for “cross[ing] an ethical line.” US Senator Mark Warner called on the Federal Trade Commission to investigate the matter, and at least two European governments opened probes. The response from social media was furious. “Get off Facebook,” tweeted Erin Kissane, an editor at a software site. “If you work there, quit. They’re fucking awful.” Writing on Google Plus, the privacy activist Lauren Weinstein wondered whether Facebook “KILLED anyone with their emotion manipulation stunt.”

The ethical concerns were justified. Although Facebook, as a private company, is not bound by the informed-consent guidelines of universities and government agencies, its decision to carry out psychological research on people without telling them was at best rash and at worst reprehensible. It violated the US Department of Health & Human Services’ policy for the protection of human research subjects (known as the “Common Rule”) as well as the ethics code of the American Psychological Association. Making the transgression all the more inexcusable was the company’s failure to exclude minors from the test group. The fact that the manipulation of information was carried out by the researchers’ computers rather than by the researchers themselves — a detail that Facebook offered in its defense — was beside the point. As University of Maryland law professor James Grimmelmann observed, psychological manipulation remains psychological manipulation “even when it’s carried out automatically.”

Still, the intensity of the reaction seemed incommensurate with its object. Once you got past the dubious ethics and the alarming title, the study turned out to be a meager piece of work. Earlier psychological research had suggested that moods, like sneezes, could be contagious. If you hang out with sad people, you’ll probably end up feeling a little blue yourself. Kramer and his collaborators (the paper was coauthored by two Cornell scientists) wanted to see if such emotional contagion might also be spread through online social networks. During a week in January 2012, they programmed Facebook’s News Feed algorithm — the program that selects which messages to funnel onto a member’s home page and which to omit — to make slight adjustments in the “emotional content” of the feeds delivered to a random sample of members. One group of test subjects saw a slightly higher number of “positive” messages than normal, while another group saw slightly more “negative” messages. To categorize messages as positive or negative, the researchers used a standard text-analysis program, called Linguistic Inquiry and Word Count, that spots words expressing emotions in written works. They then evaluated each subject’s subsequent Facebook posts to see whether the emotional content of the messages had been influenced by the alterations in the News Feed.

The researchers did discover an influence. People exposed to more negative words went on to use more negative words than would have been expected, while people exposed to more positive words used more of the same — but the effect was vanishingly small, measurable only in a tiny fraction of a percentage point. If the effect had been any more trifling, it would have been undetectable. As Kramer later explained, in a contrite Facebook post, “the actual impact on people in the experiment was the minimal amount to statistically detect it — the result was that people produced an average of one fewer emotional word, per thousand words, over the following week.” As contagions go, that’s a pretty feeble one. It seems unlikely that any participant in the study suffered the slightest bit of harm. As Kramer admitted, “the research benefits of the paper may not have justified all of this anxiety.”

* * *

What was most worrisome about the study lay not in its design or its findings, but in its ordinariness. As Facebook made clear in its official responses to the controversy, Kramer’s experiment was just the visible tip of an enormous and otherwise well-concealed iceberg. In an email to the press, a company spokesperson said the PNAS study was part of the continuing research Facebook does to understand “how people respond to different types of content, whether it’s positive or negative in tone, news from friends, or information from pages they follow.” Sheryl Sandberg, the company’s chief operating officer, reinforced that message in a press conference: “This was part of ongoing research companies do to test different products, and that was what it was.” The only problem with the study, she went on, was that it “was poorly communicated.” A former member of Facebook’s Data Science unit, Andrew Ledvina, told The Wall Street Journal that the in-house lab operates with few restrictions. “Anyone on that team could run a test,” he said. “They’re always trying to alter people’s behavior.”

Businesses have been trying to alter people’s behavior for as long as businesses have been around. Marketing departments and advertising agencies are experts at formulating, testing, and disseminating images and words that provoke emotional responses, shape attitudes, and trigger purchases. From the apple-cheeked Ivory Snow baby to the chiseled Marlboro man to the moon-eyed Cialis couple, we have for decades been bombarded by messages intended to influence our feelings. The Facebook study is part of that venerable tradition, a fact that the few intrepid folks who came forward to defend the experiment often emphasized. “We are being manipulated without our knowledge or consent all the time — by advertisers, marketers, politicians — and we all just accept that as a part of life,” argued Duncan Watts, a researcher who studies online behavior for Microsoft. “Marketing as a whole is designed to manipulate emotions,” said Nicholas Christakis, a Yale sociologist who has used Facebook data in his own research.

The “everybody does it” excuse is rarely convincing, and in this case it’s specious. Thanks to the reach of the internet, the kind of psychological and behavioral testing that Facebook does is different in both scale and kind from the market research of the past. Never before have companies been able to gather such intimate data on people’s thoughts and lives, and never before have they been able to so broadly and minutely shape the information that people see. If the Post Office had ever disclosed that it was reading everyone’s mail and choosing which letters to deliver and which not to, people would have been apoplectic, yet that is essentially what Facebook has been doing. In formulating the algorithms that run its News Feed and other media services, it molds what its billion-plus members see and then tracks their responses. It uses the resulting data to further adjust its algorithms, and the cycle of experiments begins anew. Because the algorithms are secret, people have no idea which of their buttons are being pushed — or when, or why.

Facebook is hardly unique. Pretty much every internet company performs extensive experiments on its users, trying to figure out, among other things, how to increase the time they spend using an app or a site, or how to increase the likelihood they will click on an advertisement or a link. Much of this research is innocuous. Google once tested 41 different shades of blue on a web-page toolbar to determine which color would produce the most clicks. But not all of it is innocuous. You don’t have to be paranoid to conclude that the PNAS test was far from the most manipulative of the experiments going on behind the scenes at internet companies. You only have to be sensible.

That became clear, in the midst of the Facebook controversy, when another popular web operation, the matchmaking site OKCupid, disclosed that it routinely conducts psychological research in which it doctors the information it provides to its love-seeking clientele. It has, for instance, done experiments in which it altered people’s profile pictures and descriptions. It has even circulated false “compatibility ratings” to see what happens when ill-matched strangers believe they’ll be well-matched couples. OKCupid was not exactly contrite about abusing its customers’ trust. “Guess what, everybody,” blogged the company’s cofounder, Christian Rudder: “if you use the internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.”

The problem with manipulation is that it hems us in. It weakens our volition and circumscribes our will, substituting the intentions of others for our own. When efforts to manipulate us are hidden from us, the likelihood that we’ll fall victim to them grows. Other than the dim or gullible, most people in the past understood that corporate marketing tactics, from advertisements to celebrity endorsements to package designs, were intended to be manipulative. As long as those tactics were visible, we could evaluate them and resist them — maybe even make jokes about them. That’s no longer the case, at least not when it comes to online services. When companies wield moment-by-moment control over the flow of personal correspondence and other intimate or sensitive information, tweaking it in ways that are concealed from us, we’re unable to discern, much less evaluate, the manipulative acts. We find ourselves inside a black box.

* * *

Put yourself in the shoes of Mario Costeja González. In 1998, the Spaniard ran into a little financial difficulty. He had defaulted on a debt, and to pay it off he was forced to put some real estate up for auction. The sale was duly noted in the venerable Barcelona newspaper La Vanguardia. The matter settled, Costeja González went on with his life as a graphologist, an interpreter of handwriting. The debt and the auction, as well as the 36-word press notice about them, faded from public memory. The bruise healed.

But then, in 2009, nearly a dozen years later, the episode sprang back to life. La Vanguardia put its archives online, Google’s web-crawling “bot” sniffed out the old article about the auction, the article was automatically added to the search engine’s database, and a link to it began popping into prominent view whenever someone in Spain did a search on Costeja’s name. Costeja was dismayed. It seemed unfair to have his reputation sullied by an out-of-context report on an old personal problem that had long ago been resolved. Presented without explanation in search results, the article made him look like a deadbeat. He felt, as he would later explain, that his dignity was at stake.

Costeja lodged a formal complaint with the Spanish government’s data-protection agency. He asked the regulators to order La Vanguardia to remove the article from its website and to order Google to stop linking to the notice in its search results. The agency refused to act on the request concerning the newspaper, citing the legality of the article’s original publication, but it agreed with Costeja about the unfairness of the Google listing. It told the company to remove the auction story from its results. Appalled, Google appealed the decision, arguing that in listing the story it was merely highlighting information published elsewhere. The dispute quickly made its way to the Court of Justice of the European Union in Luxembourg, where it became known as the “right to be forgotten” case. On May 13, 2014, the high court issued its decision. Siding with Costeja and the Spanish data-protection agency, the justices ruled that Google was obligated to obey the order and remove the La Vanguardia piece from its search results. The upshot: European citizens suddenly had the right to get certain unflattering information about them deleted from search engines.

Most Americans, and quite a few Europeans, were flabbergasted by the decision. They saw it not only as unworkable (how can a global search engine processing some six billion searches a day be expected to evaluate the personal grouses of individuals?), but also as a threat to the free flow of information online. Many accused the court of licensing censorship or even of creating “memory holes” in history.

But the heated reactions, however understandable, were off the mark. They reflected a misinterpretation of the decision. The court had not established a “right to be forgotten.” That essentially metaphorical phrase is mentioned only in passing in the ruling, and its attachment to the case has proven a distraction. In an open society, where freedom of thought and speech are protected, where people’s thoughts and words are their own, a right to be forgotten is as untenable as a right to be remembered. What the case was really about was an individual’s right not to be systematically misrepresented. But even putting the decision into those more modest terms is misleading. It implies that the court’s ruling was broader than it actually was.

The essential issue the justices were called upon to address was how, if at all, a 1995 European Union policy on the processing of personal data, the so-called Data Protection Directive, applied to companies that, like Google, engage in the large-scale aggregation of information online. The directive had been enacted to ease the cross-border exchange of data, while also establishing privacy and other protections for citizens. “Whereas data-processing systems are designed to serve man,” the policy reads, “they must, whatever the nationality or residence of natural persons, respect their fundamental rights and freedoms, notably the right to privacy, and contribute to economic and social progress, trade expansion and the well-being of individuals.” To shield people from abusive or unjust treatment, the directive imposed strict regulations on businesses and other organizations that act as “controllers” of the processing of personal information. It required, among other things, that any data disseminated by such controllers be not only accurate and up-to-date, but fair, relevant, and “not excessive in relation to the purposes for which they are collected and/or further processed.” What the directive left unclear was whether companies that aggregated information produced by others — companies like Google and Facebook — fell into the category of controllers. That was what the court had to decide.

Search engines, social networks, and other online aggregators have always presented themselves as playing a neutral and essentially passive role when it comes to the processing of information. They’re not creating the content they distribute — that’s done by publishers in the case of search engines, or by individual members in the case of social networks. Rather, they’re simply gathering the information and arranging it in a useful form. This view, tirelessly promoted by Google — and used by the company as a defense in the Costeja case — has been embraced by much of the public. It has become the default view. When Wikipedia cofounder Jimmy Wales, in criticizing the European court’s decision, said, “Google just helps us to find the things that are online,” he was not only mouthing the company line; he was expressing the popular conception of information aggregators.

The court took a different view. Online aggregation is not a neutral act, it ruled, but a transformative one. In collecting, organizing, and ranking information, a search engine is creating something new: a distinctive and influential product that reflects the company’s own editorial intentions and judgments, as expressed through its information-processing algorithms. “The processing of personal data carried out in the context of the activity of a search engine can be distinguished from and is additional to that carried out by publishers of websites,” the justices wrote. “Inasmuch as the activity of a search engine is therefore liable to affect significantly […] the fundamental rights to privacy and to the protection of personal data, the operator of the search engine as the person determining the purposes and means of that activity must ensure, within the framework of its responsibilities, powers and capabilities, that the activity meets the requirements of [the Data Protection Directive] in order that the guarantees laid down by the directive may have full effect.”

The European court did not pass judgment on the guarantees established by the Data Protection Directive, nor on any other existing or prospective laws or policies pertaining to the processing of personal information. It did not tell society how to assess or regulate the activities of aggregators like Google or Facebook. It did not even offer an opinion as to the process companies or lawmakers should use in deciding which personal information warranted exclusion from search results — an undertaking every bit as thorny as it’s been made out to be. What the justices did, with perspicuity and prudence, was provide us with a way to think rationally about the algorithmic manipulation of digital information and the social responsibilities it entails. The interests of a powerful international company like Google, a company that provides an indispensable service to many people, do not automatically trump the interests of a lone individual. When it comes to the operation of search engines and other information aggregators, fairness is at least as important as expedience.

Ten months have passed since the court’s ruling, and we now know that the judgment is not going to “break the internet,” as was widely predicted when it was issued. The web still works. Google has a process in place for adjudicating requests for the removal of personal information — it accepts about forty percent of them — just as it has a process in place for adjudicating requests to remove copyrighted information. Last month, Google’s Advisory Council on the Right to Be Forgotten issued a report that put the ruling and the company’s response into context. “In fact,” the council wrote, “the Ruling does not establish a general Right to Be Forgotten. Implementation of the Ruling does not have the effect of ‘forgetting’ information about a data subject. Instead, it requires Google to remove links returned in search results based on an individual’s name when those results are ‘inadequate, irrelevant or no longer relevant, or excessive.’ Google is not required to remove those results if there is an overriding public interest in them ‘for particular reasons, such as the role played by the data subject in public life.'” It is possible, in other words, to strike a reasonable balance between an individual’s interests, the interests of the public in finding information quickly, and the commercial interests of internet companies.

* * *

We have had a hard time thinking clearly about companies like Google and Facebook because we have never before had to deal with companies like Google and Facebook. They are something new in the world, and they don’t fit neatly into our existing legal and cultural templates. Because they operate at such unimaginable magnitude, carrying out millions of informational transactions every second, we’ve tended to think of them as vast, faceless, dispassionate computers — as information-processing machines that exist outside the realm of human intention and control. That’s a misperception, and a dangerous one.

Modern computers and computer networks enable human judgment to be automated, to be exercised on a vast scale and at a breathtaking pace. But it’s still human judgment. Algorithms are constructed by people, and they reflect the interests, biases, and flaws of their makers. As Google’s founders themselves pointed out many years ago, an information aggregator operated for commercial gain will inevitably be compromised and should always be treated with suspicion. That is certainly true of a search engine that mediates our intellectual explorations; it is even more true of a social network that mediates our personal associations and conversations.

Because algorithms impose on us the interests and biases of others, we have a right and an obligation to carefully examine and, when appropriate, judiciously regulate those algorithms. We have a right and an obligation to understand how we, and our information, are being manipulated. To ignore that responsibility, or to shirk it because it raises hard problems, is to grant a small group of people — the kind of people who carried out the Facebook and OKCupid experiments — the power to play with us at their whim.

Image: Emily Hummel.

Varieties of friendship


I contributed to the latest New York Times “Room for Debate” discussion, which posed this question: “Can real relationships be forged between people who never meet? Do online-only friendships count?” Here’s my reply, slightly expanded from what appeared in the Times:

“No kinds of love,” sang Lou Reed in his Velvet Underground days, “are better than others.” There’s wisdom as well as kindness in that line. Only the mean of spirit would seek to redline certain varieties of love or friendship — to claim that some human relationships “don’t count.” I have happy memories of exchanging letters with distant pen pals while in elementary school, and I recall with fondness the conversations I had with like-minded cyberians in America Online chatrooms in the early nineties. Life is lonely; all connections have value.

That doesn’t mean that all connections are the same. If it’s odious to dismiss online friendships as invalid, it’s naive to pretend that there are no distinctions in quality between friendships forged in person and those conducted from afar. An epistolary friendship is different from a telephonic friendship, and an email friendship is different from a Facebook friendship. And all of those mediated, or disembodied, friendships are different from embodied friendships, the ones established between persons who are in close enough proximity to actually touch each other.

The differences between virtual and embodied friendships come clearly into view at moments of transition, when an embodied friendship becomes a virtual one or vice versa. People who have built a friendship in person have little trouble continuing the friendship online when they’re separated. The friendship may eventually peter out — absence doesn’t always make the heart grow fonder — but the friends don’t feel any anxiety about exchanging messages through their phones or laptops.

Now think about what happens when people who have struck up friendships online finally get together in the physical world. The meetings are usually approached with nervousness and trepidation. Will we hit it off? Will we still like each other when we’re sitting at a table together? Who is this person, anyway?

The anxiety that virtual friends feel when they’re about to meet in person is telling. It reveals the fragility, the sparseness, of disembodied relationships. It makes plain that we don’t feel we really know another person until we’ve met him or her in the flesh. Screen presence leaves a lot of room for fantasizing, for projecting the self into the other; physical presence is more solid, more filled in — and, yes, more real. “Some kinds of love are mistaken for vision,” Reed sang in that same song. And some visions are mistaken for love.

Photo by John.

Guided by satellites

I will be in Cambridge, Mass., this afternoon to give a talk entitled “The World Is Not the Screen: How Computers Shape Our Sense of Place.” It is part of the ongoing Navigation Lecture Series presented by Harvard’s Radcliffe Institute for Advanced Study. The talk starts at five and is free and open to the public. So if you’re in the neighborhood, please come by. Details are here.

Just press send


We’ve been getting a little lesson in what human-factors boffins call “automation complacency” over the last couple of days. Google apparently made some change to the autosuggest algorithm in Gmail over the weekend, and the program started inserting unusual email addresses into the “To” field of messages. As Business Insider explained, “Instead of auto-completing to the most-used contact when people start typing a name into the ‘To’ field, it seems to be prioritizing contacts that they communicate with less frequently.”
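The reported behavior amounts to an inverted ranking: instead of favoring the contacts you email most, the autocomplete appeared to favor the ones you email least. A toy sketch of the intended logic (the function and data structure are hypothetical, not Gmail’s actual code):

```python
def suggest(prefix, send_counts):
    """Return contacts whose address starts with `prefix`,
    most-frequently-emailed first.

    send_counts maps email address -> number of messages sent.
    The reported Gmail glitch behaved as if this sort order
    had been reversed, surfacing rarely-used contacts first.
    """
    matches = [addr for addr in send_counts if addr.startswith(prefix)]
    return sorted(matches, key=lambda addr: send_counts[addr], reverse=True)
```

With a correct frequency-based sort, a colleague emailed daily outranks a relative emailed twice a year; flip `reverse` to `False` and you get something like the misdirected-mail stories below.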

Google quickly acknowledged the problem.

The glitch led to a flood of misdirected messages, as people pressed Send without bothering to check the computer’s work. “I got a bunch of emails yesterday that were clearly not meant for me,” blogged venture capitalist Fred Wilson on Monday. Gmail users flocked to Twitter to confess to shooting messages to the wrong people. “My mum just got my VP biz dev’s expense report,” tweeted Pingup CEO Mark Slater. “She was not happy.” Wrote CloudFlare founder Matthew Prince, “It’s become pathological.”

The bug may lie in the machine, but the pathology actually lies in the user. Automation complacency happens all the time when computers take over tasks from people. System operators place so much trust in the software that they start to zone out. They assume that the computer will perform flawlessly in all circumstances. When the computer fails or makes a mistake, the error goes unnoticed and uncorrected — until it’s too late.

Researchers Raja Parasuraman and Dietrich Manzey described the phenomenon in a 2010 article in Human Factors:

Automation complacency — operationally defined as poorer detection of system malfunctions under automation compared with under manual control — is typically found under conditions of multiple-task load, when manual tasks compete with the automated task for the operator’s attention. … Experience and practice do not appear to mitigate automation complacency: Skilled pilots and controllers exhibit the effect, and additional task practice in naive operators does not eliminate complacency. It is possible that specific experience in automation failures may reduce the extent of the effect. Automation complacency can be understood in terms of an attention allocation strategy whereby the operator’s manual tasks are attended to at the expense of the automated task, a strategy that may be driven by initial high trust in the automation.

In the worst cases, automation complacency can result in planes crashing on runways, school buses smashing into overpasses, or cruise ships running aground on sandbars. Sending an email to your mom instead of a colleague seems pretty trivial by comparison. But it’s a symptom of the same ailment, an ailment that we’ll be seeing a lot more of as we rush to hand ever more jobs and chores over to computers.

Brains, real and metaphorical


A few highlights from Lee Gomes’s long, lucid interview with Facebook’s artificial-intelligence chief Yann LeCun in IEEE Spectrum:

Gomes: We read about Deep Learning in the news a lot these days. What’s your least favorite definition of the term that you see in these stories?

LeCun: My least favorite description is, “It works just like the brain.” I don’t like people saying this because, while Deep Learning gets an inspiration from biology, it’s very, very far from what the brain actually does. And describing it like the brain gives a bit of the aura of magic to it, which is dangerous. It leads to hype; people claim things that are not true. AI has gone through a number of AI winters because people claimed things they couldn’t deliver.

Gomes: You seem to take pains to distance your work from neuroscience and biology. For example, you talk about “convolutional nets,” and not “convolutional neural nets.” And you talk about “units” in your algorithms, and not “neurons.”

LeCun: That’s true. Some aspects of our models are inspired by neuroscience, but many components are not at all inspired by neuroscience, and instead come from theory, intuition, or empirical exploration. Our models do not aspire to be models of the brain, and we don’t make claims of neural relevance.

Gomes: You’ve already expressed your disagreement with many of the ideas associated with the Singularity movement. I’m interested in your thoughts about its sociology. How do you account for its popularity in Silicon Valley?

LeCun: It’s difficult to say. I’m kind of puzzled by that phenomenon. As Neil Gershenfeld has noted, the first part of a sigmoid looks a lot like an exponential. It’s another way of saying that what currently looks like exponential progress is very likely to hit some limit—physical, economical, societal—then go through an inflection point, and then saturate. I’m an optimist, but I’m also a realist.

There are people that you’d expect to hype the Singularity, like Ray Kurzweil. He’s a futurist. He likes to have this positivist view of the future. He sells a lot of books this way. But he has not contributed anything to the science of AI, as far as I can tell. He’s sold products based on technology, some of which were somewhat innovative, but nothing conceptually new. And certainly he has never written papers that taught the world anything on how to make progress in AI.

Gomes: You yourself have a very clear notion of where computers are going to go, and I don’t think you believe we will be downloading our consciousness into them in 30 years.

LeCun: Not anytime soon.
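Gershenfeld’s observation, which LeCun cites above, is easy to check numerically: early in its run, a logistic (sigmoid) curve is nearly indistinguishable from an exponential with the same rate constant, and only later does it bend toward saturation. The function names and parameter values below are mine, chosen just for illustration:

```python
import math

def logistic(t, limit=1.0, k=1.0, t0=0.0):
    """Logistic growth curve: exponential at first, saturating at `limit`."""
    return limit / (1.0 + math.exp(-k * (t - t0)))

def exponential(t, limit=1.0, k=1.0, t0=0.0):
    """Pure exponential with the same early-phase growth rate."""
    return limit * math.exp(k * (t - t0))

# Well before the inflection point the two curves nearly coincide...
early = logistic(-5.0) / exponential(-5.0)   # ratio close to 1
# ...but past the inflection point they diverge sharply: the
# exponential keeps climbing while the logistic flattens out.
late = exponential(3.0) / logistic(3.0)      # ratio much greater than 1
```

An observer sampling only the early portion of the data has no way, from the numbers alone, to tell which curve they are on — which is LeCun’s point about mistaking current progress for an unbounded trend.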