Monthly Archives: February 2015

Just press send


We’ve been getting a little lesson in what human-factors boffins call “automation complacency” over the last couple of days. Google apparently made some change to the autosuggest algorithm in Gmail over the weekend, and the program started inserting unusual email addresses into the “To” field of messages. As Business Insider explained, “Instead of auto-completing to the most-used contact when people start typing a name into the ‘To’ field, it seems to be prioritizing contacts that they communicate with less frequently.”
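To make the reported behavior concrete, here is a minimal sketch of how a frequency-ranked autosuggest works; the function, contact data, and parameter names are invented for illustration and this is not Gmail's actual code. Flipping the sort direction reproduces exactly the kind of misrouting people described.

```python
# Hypothetical illustration of contact autosuggest ranking -- not Gmail's actual code.
from typing import Dict, List

def suggest(prefix: str, send_counts: Dict[str, int], inverted: bool = False) -> List[str]:
    """Return contacts matching `prefix`, ranked by how often they've been emailed."""
    matches = [c for c in send_counts if c.lower().startswith(prefix.lower())]
    # Expected behavior: most-used contact first.
    # Inverted: least-used contact first -- the behavior users reported over the weekend.
    return sorted(matches, key=lambda c: send_counts[c], reverse=not inverted)

counts = {"maria@work.example": 412, "marge@family.example": 9}
print(suggest("mar", counts))                  # frequent colleague first, as expected
print(suggest("mar", counts, inverted=True))   # mum first -- the misdirected-email scenario
```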

Google quickly acknowledged the problem.

The glitch led to a flood of misdirected messages, as people pressed Send without bothering to check the computer’s work. “I got a bunch of emails yesterday that were clearly not meant for me,” blogged venture capitalist Fred Wilson on Monday. Gmail users flocked to Twitter to confess to shooting messages to the wrong people. “My mum just got my VP biz dev’s expense report,” tweeted Pingup CEO Mark Slater. “She was not happy.” Wrote CloudFlare founder Matthew Prince, “It’s become pathological.”

The bug may lie in the machine, but the pathology actually lies in the user. Automation complacency happens all the time when computers take over tasks from people. System operators place so much trust in the software that they start to zone out. They assume that the computer will perform flawlessly in all circumstances. When the computer fails or makes a mistake, the error goes unnoticed and uncorrected — until it’s too late.

Researchers Raja Parasuraman and Dietrich Manzey described the phenomenon in a 2010 article in Human Factors:

Automation complacency — operationally defined as poorer detection of system malfunctions under automation compared with under manual control — is typically found under conditions of multiple-task load, when manual tasks compete with the automated task for the operator’s attention. … Experience and practice do not appear to mitigate automation complacency: Skilled pilots and controllers exhibit the effect, and additional task practice in naive operators does not eliminate complacency. It is possible that specific experience in automation failures may reduce the extent of the effect. Automation complacency can be understood in terms of an attention allocation strategy whereby the operator’s manual tasks are attended to at the expense of the automated task, a strategy that may be driven by initial high trust in the automation.

In the worst cases, automation complacency can result in planes crashing on runways, school buses smashing into overpasses, or cruise ships running aground on sandbars. Sending an email to your mom instead of a colleague seems pretty trivial by comparison. But it’s a symptom of the same ailment, an ailment that we’ll be seeing a lot more of as we rush to hand ever more jobs and chores over to computers.

Brains, real and metaphorical


A few highlights from Lee Gomes’s long, lucid interview with Facebook’s artificial-intelligence chief Yann LeCun in IEEE Spectrum:

Gomes: We read about Deep Learning in the news a lot these days. What’s your least favorite definition of the term that you see in these stories?

LeCun: My least favorite description is, “It works just like the brain.” I don’t like people saying this because, while Deep Learning gets an inspiration from biology, it’s very, very far from what the brain actually does. And describing it like the brain gives a bit of the aura of magic to it, which is dangerous. It leads to hype; people claim things that are not true. AI has gone through a number of AI winters because people claimed things they couldn’t deliver.

Gomes: You seem to take pains to distance your work from neuroscience and biology. For example, you talk about “convolutional nets,” and not “convolutional neural nets.” And you talk about “units” in your algorithms, and not “neurons.”

LeCun: That’s true. Some aspects of our models are inspired by neuroscience, but many components are not at all inspired by neuroscience, and instead come from theory, intuition, or empirical exploration. Our models do not aspire to be models of the brain, and we don’t make claims of neural relevance.

Gomes: You’ve already expressed your disagreement with many of the ideas associated with the Singularity movement. I’m interested in your thoughts about its sociology. How do you account for its popularity in Silicon Valley?

LeCun: It’s difficult to say. I’m kind of puzzled by that phenomenon. As Neil Gershenfeld has noted, the first part of a sigmoid looks a lot like an exponential. It’s another way of saying that what currently looks like exponential progress is very likely to hit some limit—physical, economical, societal—then go through an inflection point, and then saturate. I’m an optimist, but I’m also a realist.

There are people that you’d expect to hype the Singularity, like Ray Kurzweil. He’s a futurist. He likes to have this positivist view of the future. He sells a lot of books this way. But he has not contributed anything to the science of AI, as far as I can tell. He’s sold products based on technology, some of which were somewhat innovative, but nothing conceptually new. And certainly he has never written papers that taught the world anything on how to make progress in AI.

Gomes: You yourself have a very clear notion of where computers are going to go, and I don’t think you believe we will be downloading our consciousness into them in 30 years.

LeCun: Not anytime soon.
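An aside on LeCun’s sigmoid remark above, offered as my own gloss rather than anything from the interview: the standard logistic curve is

$$ f(x) \;=\; \frac{K}{1 + e^{-r(x - x_0)}}, \qquad f'(x) \;=\; r\,f(x)\left(1 - \frac{f(x)}{K}\right). $$

While f is still small relative to the ceiling K, the factor (1 − f/K) is close to 1, so f′ ≈ r·f and the curve is indistinguishable from exponential growth; only as f approaches K does growth flatten out. That is the sense in which the early part of an S-curve can be mistaken for an exponential.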

Peak code?


“Will human replacement — the production by ourselves of ever better substitutes for ourselves — deliver an economic utopia with smart machines satisfying our every material need? Or will our self-induced redundancy leave us earning too little to purchase the products our smart machines can make?” So ask three Boston University economists, Seth Benzell, Laurence Kotlikoff, and Guillermo LaGarda, and Columbia’s Jeffrey Sachs. In an attempt to answer the question, the researchers turned to — what else? — a computer. They programmed a “bare-bones” model of the economy, featuring high-tech workers (who produce software) and low-tech workers (who produce services), and let the simulation run under different sets of variables.

The results were, as the economists put it in a new paper on the experiment, “disturbing.” The simulation suggests that “technological progress can be immiserating” and that even talented software programmers may face tough times in an ever more automated economy. The reason lies in the durability and reusability of software. Code is not used up; it accumulates. As the cost of deploying software for productive work (i.e., the cost of automation) goes down, demand for new code spikes, bringing lots of new programmers into the labor market. The generous compensation provided to the programmers leads at first to higher savings and capital formation, fueling the boom. But “over time,” the model reveals, “as the stock of legacy code grows, the demand for new code, and thus for high-tech workers, falls.”

As a simple illustration, the authors point to the development of a robotic chess player. Once you have a robot that can outperform all human players, the incentive for programming new robotic players drops sharply. This is something we’ve already seen, as the authors point out: “Take Junior – the reigning World Computer Chess Champion. Junior can beat every current and, possibly, every future human on the planet. Consequently, his old code has largely put new chess programmers out of business.” Once any program reaches a superhuman level of productivity in a task, the incentive to invest in further, marginal gains falls.

The authors lay out the resulting economic dynamic:

The increase in [the code retention rate] initially raises the compensation of code-writing high-tech workers. This draws more high-tech workers into code-writing, thereby raising high-tech worker compensation … Things change over time. As more durable code comes on line, the marginal productivity of code falls, making new code writers increasingly redundant. Eventually the demand for code-writing high-tech workers is limited to those needed to cover the depreciation of legacy code, i.e., to retain, maintain, and update legacy code. The remaining high-tech workers find themselves working in the service sector [and pushing down wages in those occupations]. The upshot is that high-tech workers can end up potentially earning far less than in the [model’s] initial steady state.

As usable code stocks swell, the model indicates, we will at some point pass the cycle’s point of peak code — the moment of maximum demand for new code — and the prospects for employment in programming will begin to decline. Code boom will turn to code bust. (The bust will be even deeper, the economists found, if software is distributed as open source and hence made easier to share.) Even though high-tech workers “start out earning far more than low-tech workers,” they “end up earning far less.”
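The shape of that boom and bust is easy to reproduce in a toy simulation. The sketch below is my own drastic simplification with invented parameters, not the Benzell-Kotlikoff-LaGarda-Sachs model: demand for code services follows an S-curve, written code depreciates only slowly, and the programmers needed each period are those writing the shortfall plus a small share maintaining the legacy stock.

```python
# Toy illustration of the "peak code" dynamic -- an invented simplification,
# not the economists' actual model. All parameters are made up.

retention = 0.95           # share of legacy code still usable next period (code is durable)
maintenance = 0.02         # programmer effort per unit of legacy code kept in service
growth, cap = 0.5, 100.0   # S-curve growth rate and saturation level of demand for code

stock = demand = 5.0
for period in range(1, 26):
    demand += growth * demand * (1 - demand / cap)    # demand rises fast, then saturates
    surviving = retention * stock                     # legacy code carried forward
    new_code = max(0.0, demand - surviving)           # only the shortfall gets written
    programmers = new_code + maintenance * surviving  # writers plus maintainers
    stock = surviving + new_code
    print(f"period {period:2d}  new code {new_code:6.2f}  programmers {programmers:6.2f}")

# Programmer demand climbs during the boom, peaks ("peak code"), then falls back
# toward the level needed merely to replace and maintain the legacy stock.
```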

One thing the economists don’t seem to account for is the automation of programming itself, particularly the use of software to perform many of the tasks necessary to maintain, update, and redeploy legacy code. The automation of coding, which would be encouraged as programmers’ wages increase during the boom period, would likely deepen the bust even further.

Computer models of complex systems are always simplifications, of course, but this study serves to raise important and complicated questions about the long-run demand for programmers. It’s become popular to suggest that all kids should be taught to code as part of their education. That way, the theory goes, they’ll be assured of good jobs in an ever more computerized economy. This study calls into question that hopeful assumption. There can be a glut of coders just as there can be a glut of code.

Image of hackathon: Wikipedia.

@Gilligan #Franzen #Facebook #TV


From Susan Lerner’s interview with Jonathan Franzen in Booth:

SL: I want to ask you about technology and social media. … I was wondering, given your change of heart about television and its place within our culture, can you comment on this conversion and the possibility that social media might also one day redeem itself?

JF: TV redeemed itself by becoming more like the novel, which is to say: interested in sustained, morally complex narrative that is compelling and enjoyable. How that happens with pictures of you and your friends at T. G. I. Friday’s isn’t clear to me. Twitter isn’t even trying to be a narrative form. Its structure is antithetical to sustained and carefully considered story-telling. How does a structure like that suddenly turn itself into narrative art? You could say, well, Gilligan’s Island wasn’t art, either. But Gilligan’s Island paved the way, by being twenty-two minutes of a narrative, however dumb, to the twenty-two minutes of Nurse Jackie. 

SL: You see a trajectory?

JF: Yes, you can see the trajectory there. Which is the same trajectory that the novel itself followed. There was a lot of really bad experimentation in the seventeenth century as we were trying to work out these fundamental problems of “Is this narrative pretending to be true? Is it acknowledging that it’s not true? Are novels only about fantastical things? Where does everyday life fit in?” There were a couple of centuries of sorting that out before the novel really got going in Richardson and Fielding, and then, soon after, culminating in Austen. You can see that maturation in movies as well. You had Birth of a Nation before you had The Rules of the Game. It takes a while for artistic media to mature—I take that point—but I don’t know anyone who thinks that social media is an artistic medium. It’s more like another phone, home movies, email, whatever. It’s like a better version of the way people socially interacted in the past, a more technologically advanced version. But if you use your Facebook page to publish chapters of a novel, what you get is a novel, not Facebook. It’s a struggle to imagine what value is added by the technology itself.

SL: I think there’s an argument that can be made about new technology providing different forms and twists on established ideas, so people can examine—

JF: I’m just looking at the phenomenology of this technology in everyday life.

SL: Pictures of desserts.

JF: Yeah, pictures of desserts and the fact that you can’t sit still for five minutes without sending and receiving texts. I mean, it does not look like any form of engagement with art that I recognize from any field. It looks like a distraction and an addiction and a tool. A useful tool. I’m not a technophobe. I’m on the internet all day, every day, except when I’m actually trying to write, and even then I’m on a computer and using, often, material that I’ve taken from the internet. It’s not that I have technophobia. It’s the notion that somehow this is a transformative, liberating thing that I take issue with, when it seems to me more like a perfection of the free market’s infiltration of every aspect of a human being’s waking life.

It’s interesting — this is an aside — how deeply Gilligan’s Island managed to engrave itself into the cultural worldview of a certain generation of Americans. Despite its surface dumbness, the show, I would suggest, carries a mythical weight, what with the totemic quality of the characters — scientist, celebrity, tycoon, seafarer, etc. — and the Promethean nature of the plot.

O, unscepter’d isle, demi-paradise, demi-hell!

Where will driverless cars drive us?


I have an article in Fortune that looks at the hype surrounding autonomous cars. Here’s a bit from the piece highlighting recent research that calls into question some of the common assumptions about robotic vehicles:

At a car conference last September, Steven Shladover, a research engineer at the University of California at Berkeley, explained that automotive automation presents far more daunting challenges than aircraft automation. Cars travel much closer together than planes do, they have less room to maneuver in emergencies, and drivers have to deal with a welter of earthly obstacles, from jaywalkers to work crews to potholes. Developing a driverless car, Shladover said, will be orders of magnitude harder than developing a pilotless airliner. It’s going to be a long time, he cautioned, before we’ll be able to curl up in the back seat while a robot drives us to work.

Even if perfect automation remains beyond our reach, progress in automotive robotics will sprint forward. Top-end luxury cars are already highly automated, able to center themselves in a lane and adjust their speed to fit traffic conditions, and computers are set to take over many more driving tasks in the years ahead. As always, though, the road to the future will have many twists and forks. The choices that companies and designers make in automating cars will influence not only how we drive but how we live. As we learned in the last century, advances in personal-transportation technologies can have profound consequences for everything from housing to urban planning to energy policy.

Consider safety. It’s often assumed that automation will reduce traffic accidents, if not eliminate them entirely. But that’s not necessarily the case. Research into human-computer interaction reveals that partial automation can actually make complex tasks like driving more dangerous. People relying on automation quickly become complacent, trusting the computer to perform flawlessly, and that raises the odds that they’ll make mistakes when they have to reengage with the work, particularly in an emergency. A study of drivers by U.K. scholars Neville Stanton and Mark Young found that while shifting routine driving chores to computers can reduce workload and stress, it also “lulls drivers into a false sense of security.” They lose “situational awareness,” which can have tragic consequences when split-second reactions are required to avoid an accident.

The risks will likely be magnified during the long transitional period when automobiles with varying degrees of automation share the road. Given that the average American passenger vehicle is more than 11 years old, there will be “at least a several-decade-long period during which conventional and self-driving vehicles would need to interact,” report Michael Sivak and Brandon Schoettle of the University of Michigan’s Transportation Research Institute. That becomes particularly problematic when you take into account driving’s complex social psychology. When we change lanes, enter traffic, or execute other tricky maneuvers, we tend to make quick, intuitive decisions based on our experience of how other drivers act. But all those deeply learned assumptions may no longer apply when the other driver is a robot. Just the loss of eye contact between human drivers, Sivak and Schoettle warn, could introduce new and unexpected risks, particularly for drivers of older, less automated cars.

Beyond the knotty technical questions are equally complicated social ones. Peter Norton, a transportation expert at the University of Virginia, points out that the way autonomous vehicles are designed will have a profound influence on people’s driving habits. If automation makes driving and parking easier, and in particular if it allows commuters to do other things while in their cars, it could end up encouraging people to drive more often or to commute over longer distances. Cities and suburbs would become even more congested, highway infrastructure would come under more stress, and investments in public transport might wither further. “If we rebuild the landscape for autonomous vehicles,” Norton writes, “we may make it unsuitable for anything else — including walking.”

On the other hand, if we design autonomous vehicles as part of a thoughtful overhaul of the nation’s transit systems, the new cars could play a part in reducing traffic, curtailing air pollution, and engendering more livable cities. It’s a mistake, Norton argues, to view autonomous cars in isolation, and it’s an even bigger mistake to assume that automotive automation will be a panacea for complex problems like traffic and safety. “Before we make autonomous cars the solution,” he says, “we must formulate the problem correctly.”

Image: US Department of Transportation.

The medium is the morality


In 1870, W. A. Rogers, a British bureaucrat in the Bombay Civil Service, wrote of the fortifying effect that modern transport systems were having on the character of the local populace:

Railways are opening the eyes of the people who are within reach of them in a variety of ways. They teach them that time is worth money, and induce them to economise that which they had been in the habit of slighting and wasting; they teach them that speed attained is time, and therefore money, saved or made. . . . Above all, they induce in them habits of self-dependence, causing them to act for themselves promptly and not lean on others.

The locomotive was a moral engine as well as a mechanical one. It carried people horizontally, across the land, but also vertically, up the ladder of enlightenment. As Russell Hittinger notes:

What is most striking about [Rogers’s] statement is that the machine is regarded as the proximate cause of the liberal virtues; habits of self-dependence are the effect of the application of a technology. The benighted peoples of the sub-continent are to be civilized, not by reading Cicero, not by conversion to the Church of England, not even by adopting the liberal faith, but by receiving the discipline of trains and clocks. The machine is both the exemplar and the proximate cause of individual and cultural perfection.

Tools are, whether by design or by accident, imbued with a certain moral character — they instruct us in how to act — and that in-built, artificial morality offers a readymade substitute for our own. The technology becomes an ethical guide. We embrace its morality as our own. An earlier and more dramatic example of such ethical transference came, as Hittinger suggests, in the form of the mechanical clock. Before the arrival of the time-keeping machine, life was largely “free of haste, careless of exactitude, unconcerned by productivity,” the historian Jacques Le Goff has written. With every tick, the new clock in the town square issued an indictment of such idleness and imprecision. It taught people that time was as measurable as money, something precious that could be wasted or lost. The clock became, to quote David Landes, “prod and key to personal achievement and productivity.”

Just as the clock and the railroad gave our forebears lessons in, and indeed models for, industriousness, thrift, and punctuality, so the computer today offers us its own character instruction. Its technical features are taken for ethical traits. Consider how the protocols of networking, the arcane codes that allow computers to exchange data and share resources, have become imbued with moral weight. The computer fulfills its potential, becomes a whole being, so to speak, only when it is connected to and actively communicating with other computers. An isolated computer is as bad as an idle computer. And the same goes for people. The sense of the computer network as a model for a moral society runs, with different emphases, through the work of such prominent and diverse thinkers as Yochai Benkler, David Weinberger, Clay Shirky, Steven Johnson, and Kevin Kelly. We, too, become whole beings only when we are connected. And if being connected is the ideal, then being disconnected becomes morally suspect. The loner, the recluse, the outsider, the solitary thinker, the romantic quester: all such individuals carry an ethical stain today that goes beyond mere unsociability — they are letting the rest of us down by not sharing, by not connecting. To be inaccessible to the network is to waste one’s social capital, a deadly sin.

But the computer goes even further than mechanical tools and systems in shaping our conception of virtue. It provides more than just a model. It offers us a means for “outsourcing” our ethical sense, as Evan Selinger and Thomas Seager put it. With the personal computer, we have an intimate machine, a technological companion and guru, that can automate the making of moral choices, that through its programming can prod us, nudge us, and otherwise lead us down the righteous path. Arianna Huffington celebrates the potential of the smartphone to provide a “GPS for the soul,” offering ethical “course corrections” as we go through the day.

In discussing the automation of moral choice, Hittinger draws a connection with the work of the historian Christopher Dawson, who in a 1960 lecture argued that modern technology, and the social order it both represents and underpins, has become “the real basis of secular culture”:

Modern technologies are not only “labor saving” devices. A labor saving device, like an automated farm implement or a piston, replaces repetitive human acts. But most distinctive of contemporary technology is the replacement of the human act; or, of what the scholastic philosophers called the actus humanus. The machine reorganizes and to some extent supplants the world of human action, in the moral sense of the term. … It is important to understand that Dawson’s criticism of technology is not aimed at the tool per se. His criticism has nothing to do with the older, and in our context, misleading notion of “labor saving” devices. Rather, it is aimed at a new cultural pattern in which tools are either deliberately designed to replace the human act, or at least have the unintended effect of making the human act unnecessary or subordinate to the machine.

Philosophy professor Joshua Hochschild goes further: “Automation makes us forget that we are moral agents.” When software code becomes moral code, moral code becomes meaningless.