Our algorithms, ourselves

An earlier version of this essay appeared last year, under the headline “The Manipulators,” in the Los Angeles Review of Books.

Since the launch of Netscape and Yahoo twenty years ago, the story of the internet has been one of new companies and new products, a story shaped largely by the interests of entrepreneurs and venture capitalists. The plot has been linear; the pace, relentless. In 1995 came Amazon and Craigslist; in 1997, Netflix; in 1998, Google; in 1999, Napster and Blogger; in 2001, iTunes; in 2003, MySpace; in 2004, Facebook; in 2005, YouTube; in 2006, Twitter; in 2007, the iPhone and the Kindle; in 2008, Airbnb; in 2010, Instagram and Uber; in 2011, Snapchat; in 2012, Coursera and Tinder. It has been a carnival ride, and we, the public, have been the giddy passengers.

The story may be changing now. Though the current remains swift, eddies are appearing in the stream. Last year, the big news about the net came not in the form of buzzy startups or cool gadgets, but in the shape of two dry, arcane documents. One was a scientific paper describing an experiment in which researchers attempted to alter the moods of Facebook users by secretly manipulating the messages they saw. The other was a ruling by the European Union’s highest court granting citizens the right to have outdated or inaccurate information about them erased from Google and other search engines. Both documents provoked consternation, anger, and argument. Both raised important, complicated issues without resolving them. Arriving in the wake of Edward Snowden’s revelations about the NSA’s online spying operation, both seemed to herald, in very different ways, a new stage in the net’s history — one in which the public will be called upon to guide the technology, rather than the other way around. We may look back on 2014 as the year the internet began to grow up.

* * *

The Facebook study seemed fated to stir up controversy. Its title read like a bulletin from a dystopian future: “Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks.” But when, on June 2, 2014, the article first appeared on the website of the Proceedings of the National Academy of Sciences (PNAS), it drew little notice or comment. It sank quietly into the vast swamp of academic publishing. That changed abruptly three weeks later, on June 26, when technology reporter Aviva Rutkin posted a brief account of the study on the website of New Scientist magazine. She noted that the research had been run by a Facebook employee, a social psychologist named Adam Kramer who worked in the firm’s large Data Science unit, and that it had involved more than half a million members of the social network. Smelling a scandal, other journalists rushed to the PNAS site to give the paper a read. They discovered that Facebook had not bothered to inform its members about their participation in the experiment, much less ask their consent.

Outrage ensued, as the story pinballed through the media. “If you were still unsure how much contempt Facebook has for its users,” declared the technology news site PandoDaily, “this will make everything hideously clear.” A New York Times writer accused Facebook of treating people like “lab rats,” while The Washington Post, in an editorial, criticized the study for “cross[ing] an ethical line.” US Senator Mark Warner called on the Federal Trade Commission to investigate the matter, and at least two European governments opened probes. The response from social media was furious. “Get off Facebook,” tweeted Erin Kissane, an editor at a software site. “If you work there, quit. They’re fucking awful.” Writing on Google Plus, the privacy activist Lauren Weinstein wondered whether Facebook “KILLED anyone with their emotion manipulation stunt.”

The ethical concerns were justified. Although Facebook, as a private company, is not bound by the informed-consent guidelines of universities and government agencies, its decision to carry out psychological research on people without telling them was at best rash and at worst reprehensible. It violated the US Department of Health & Human Services’ policy for the protection of human research subjects (known as the “Common Rule”) as well as the ethics code of the American Psychological Association. Making the transgression all the more inexcusable was the company’s failure to exclude minors from the test group. The fact that the manipulation of information was carried out by the researchers’ computers rather than by the researchers themselves — a detail that Facebook offered in its defense — was beside the point. As University of Maryland law professor James Grimmelmann observed, psychological manipulation remains psychological manipulation “even when it’s carried out automatically.”

Still, the intensity of the reaction seemed incommensurate with its object. Once you got past the dubious ethics and the alarming title, the study turned out to be a meager piece of work. Earlier psychological research had suggested that moods, like sneezes, could be contagious. If you hang out with sad people, you’ll probably end up feeling a little blue yourself. Kramer and his collaborators (the paper was coauthored by two Cornell scientists) wanted to see if such emotional contagion might also be spread through online social networks. During a week in January 2012, they programmed Facebook’s News Feed algorithm — the program that selects which messages to funnel onto a member’s home page and which to omit — to make slight adjustments in the “emotional content” of the feeds delivered to a random sample of members. One group of test subjects saw a slightly higher number of “positive” messages than normal, while another group saw slightly more “negative” messages. To categorize messages as positive or negative, the researchers used a standard text-analysis program, called Linguistic Inquiry and Word Count, that spots words expressing emotions in written works. They then evaluated each subject’s subsequent Facebook posts to see whether the emotional content of the messages had been influenced by the alterations in the News Feed.
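
For readers curious about the mechanics of that classification step, here is a minimal sketch of word-list scoring in the spirit of LIWC. The tiny word sets and the classify_post helper are illustrative stand-ins, not Facebook's code or the actual LIWC dictionaries, which contain thousands of categorized terms, and the exact rule the study used to label posts may have differed.

```python
# A toy illustration of word-list sentiment scoring. The word lists below are
# tiny stand-ins; real LIWC-style dictionaries contain thousands of terms.

import re

POSITIVE_WORDS = {"happy", "love", "great", "wonderful", "excited"}  # illustrative only
NEGATIVE_WORDS = {"sad", "angry", "terrible", "awful", "lonely"}     # illustrative only

def classify_post(text):
    """Label a post 'positive', 'negative', or 'neutral' by counting emotion words."""
    words = re.findall(r"[a-z']+", text.lower())
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(classify_post("Feeling great and excited about the weekend!"))  # -> positive
print(classify_post("Another lonely, awful Monday."))                 # -> negative
```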

The researchers did discover an influence. People exposed to more negative words went on to use more negative words than would have been expected, while people exposed to more positive words used more of the same — but the effect was vanishingly small, measurable only in a tiny fraction of a percentage point. If the effect had been any more trifling, it would have been undetectable. As Kramer later explained, in a contrite Facebook post, “the actual impact on people in the experiment was the minimal amount to statistically detect it — the result was that people produced an average of one fewer emotional word, per thousand words, over the following week.” As contagions go, that’s a pretty feeble one. It seems unlikely that any participant in the study suffered the slightest bit of harm. As Kramer admitted, “the research benefits of the paper may not have justified all of this anxiety.”

* * *

What was most worrisome about the study lay not in its design or its findings, but in its ordinariness. As Facebook made clear in its official responses to the controversy, Kramer’s experiment was just the visible tip of an enormous and otherwise well-concealed iceberg. In an email to the press, a company spokesperson said the PNAS study was part of the continuing research Facebook does to understand “how people respond to different types of content, whether it’s positive or negative in tone, news from friends, or information from pages they follow.” Sheryl Sandberg, the company’s chief operating officer, reinforced that message in a press conference: “This was part of ongoing research companies do to test different products, and that was what it was.” The only problem with the study, she went on, was that it “was poorly communicated.” A former member of Facebook’s Data Science unit, Andrew Ledvina, told The Wall Street Journal that the in-house lab operates with few restrictions. “Anyone on that team could run a test,” he said. “They’re always trying to alter people’s behavior.”

Businesses have been trying to alter people’s behavior for as long as businesses have been around. Marketing departments and advertising agencies are experts at formulating, testing, and disseminating images and words that provoke emotional responses, shape attitudes, and trigger purchases. From the apple-cheeked Ivory Snow baby to the chiseled Marlboro man to the moon-eyed Cialis couple, we have for decades been bombarded by messages intended to influence our feelings. The Facebook study is part of that venerable tradition, a fact that the few intrepid folks who came forward to defend the experiment often emphasized. “We are being manipulated without our knowledge or consent all the time — by advertisers, marketers, politicians — and we all just accept that as a part of life,” argued Duncan Watts, a researcher who studies online behavior for Microsoft. “Marketing as a whole is designed to manipulate emotions,” said Nicholas Christakis, a Yale sociologist who has used Facebook data in his own research.

The “everybody does it” excuse is rarely convincing, and in this case it’s specious. Thanks to the reach of the internet, the kind of psychological and behavioral testing that Facebook does is different in both scale and kind from the market research of the past. Never before have companies been able to gather such intimate data on people’s thoughts and lives, and never before have they been able to so broadly and minutely shape the information that people see. If the Post Office had ever disclosed that it was reading everyone’s mail and choosing which letters to deliver and which not to, people would have been apoplectic, yet that is essentially what Facebook has been doing. In formulating the algorithms that run its News Feed and other media services, it molds what its billion-plus members see and then tracks their responses. It uses the resulting data to further adjust its algorithms, and the cycle of experiments begins anew. Because the algorithms are secret, people have no idea which of their buttons are being pushed — or when, or why.

Facebook is hardly unique. Pretty much every internet company performs extensive experiments on its users, trying to figure out, among other things, how to increase the time they spend using an app or a site, or how to increase the likelihood they will click on an advertisement or a link. Much of this research is innocuous. Google once tested 41 different shades of blue on a web-page toolbar to determine which color would produce the most clicks. But not all of it is innocuous. You don’t have to be paranoid to conclude that the PNAS test was far from the most manipulative of the experiments going on behind the scenes at internet companies. You only have to be sensible.
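
The innocuous end of that spectrum is easy enough to picture: an experiment like the toolbar test is, at bottom, a comparison of click-through rates across variants. A minimal sketch, using invented numbers rather than anything from Google's actual experiment:

```python
# Toy A/B/n comparison: pick the link color with the highest observed
# click-through rate. All figures are invented for illustration.

variants = {
    "#2200CC": {"impressions": 100_000, "clicks": 3_150},
    "#2233CC": {"impressions": 100_000, "clicks": 3_420},
    "#3344DD": {"impressions": 100_000, "clicks": 3_275},
}

def click_through_rate(stats):
    return stats["clicks"] / stats["impressions"]

winner = max(variants, key=lambda color: click_through_rate(variants[color]))
print(winner, f"{click_through_rate(variants[winner]):.2%}")  # -> #2233CC 3.42%
```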

That became clear, in the midst of the Facebook controversy, when another popular web operation, the matchmaking site OKCupid, disclosed that it routinely conducts psychological research in which it doctors the information it provides to its love-seeking clientele. It has, for instance, done experiments in which it altered people’s profile pictures and descriptions. It has even circulated false “compatibility ratings” to see what happens when ill-matched strangers believe they’ll be well-matched couples. OKCupid was not exactly contrite about abusing its customers’ trust. “Guess what, everybody,” blogged the company’s cofounder, Christian Rudder: “if you use the internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.”

The problem with manipulation is that it hems us in. It weakens our volition and circumscribes our will, substituting the intentions of others for our own. When efforts to manipulate us are hidden from us, the likelihood that we’ll fall victim to them grows. Other than the dim or gullible, most people in the past understood that corporate marketing tactics, from advertisements to celebrity endorsements to package designs, were intended to be manipulative. As long as those tactics were visible, we could evaluate them and resist them — maybe even make jokes about them. That’s no longer the case, at least not when it comes to online services. When companies wield moment-by-moment control over the flow of personal correspondence and other intimate or sensitive information, tweaking it in ways that are concealed from us, we’re unable to discern, much less evaluate, the manipulative acts. We find ourselves inside a black box.

* * *

Put yourself in the shoes of Mario Costeja González. In 1998, the Spaniard ran into a little financial difficulty. He had defaulted on a debt, and to pay it off he was forced to put some real estate up for auction. The sale was duly noted in the venerable Barcelona newspaper La Vanguardia. The matter settled, Costeja González went on with his life as a graphologist, an interpreter of handwriting. The debt and the auction, as well as the 36-word press notice about them, faded from public memory. The bruise healed.

But then, in 2009, nearly a dozen years later, the episode sprang back to life. La Vanguardia put its archives online, Google’s web-crawling “bot” sniffed out the old article about the auction, the article was automatically added to the search engine’s database, and a link to it began popping into prominent view whenever someone in Spain did a search on Costeja’s name. Costeja was dismayed. It seemed unfair to have his reputation sullied by an out-of-context report on an old personal problem that had long ago been resolved. Presented without explanation in search results, the article made him look like a deadbeat. He felt, as he would later explain, that his dignity was at stake.

Costeja lodged a formal complaint with the Spanish government’s data-protection agency. He asked the regulators to order La Vanguardia to remove the article from its website and to order Google to stop linking to the notice in its search results. The agency refused to act against the newspaper, citing the legality of the article’s original publication, but it agreed with Costeja about the unfairness of the Google listing. It told the company to remove the auction story from its results. Appalled, Google appealed the decision, arguing that in listing the story it was merely highlighting information published elsewhere. The dispute quickly made its way to the Court of Justice of the European Union in Luxembourg, where it became known as the “right to be forgotten” case. On May 13, 2014, the high court issued its decision. Siding with Costeja and the Spanish data-protection agency, the justices ruled that Google was obligated to obey the order and remove the La Vanguardia piece from its search results. The upshot: European citizens suddenly had the right to get certain unflattering information about them deleted from search engines.

Most Americans, and quite a few Europeans, were flabbergasted by the decision. They saw it not only as unworkable (how can a global search engine processing some six billion searches a day be expected to evaluate the personal grouses of individuals?), but also as a threat to the free flow of information online. Many accused the court of licensing censorship or even of creating “memory holes” in history.

But the heated reactions, however understandable, were off the mark. They reflected a misinterpretation of the decision. The court had not established a “right to be forgotten.” That essentially metaphorical phrase is mentioned only in passing in the ruling, and its attachment to the case has proven a distraction. In an open society, where freedom of thought and speech are protected, where people’s thoughts and words are their own, a right to be forgotten is as untenable as a right to be remembered. What the case was really about was an individual’s right not to be systematically misrepresented. But even putting the decision into those more modest terms is misleading. It implies that the court’s ruling was broader than it actually was.

The essential issue the justices were called upon to address was how, if at all, a 1995 European Union policy on the processing of personal data, the so-called Data Protection Directive, applied to companies that, like Google, engage in the large-scale aggregation of information online. The directive had been enacted to ease the cross-border exchange of data, while also establishing privacy and other protections for citizens. “Whereas data-processing systems are designed to serve man,” the policy reads, “they must, whatever the nationality or residence of natural persons, respect their fundamental rights and freedoms, notably the right to privacy, and contribute to economic and social progress, trade expansion and the well-being of individuals.” To shield people from abusive or unjust treatment, the directive imposed strict regulations on businesses and other organizations that act as “controllers” of the processing of personal information. It required, among other things, that any data disseminated by such controllers be not only accurate and up-to-date, but fair, relevant, and “not excessive in relation to the purposes for which they are collected and/or further processed.” What the directive left unclear was whether companies that aggregated information produced by others — companies like Google and Facebook — fell into the category of controllers. That was what the court had to decide.

Search engines, social networks, and other online aggregators have always presented themselves as playing a neutral and essentially passive role when it comes to the processing of information. They’re not creating the content they distribute — that’s done by publishers in the case of search engines, or by individual members in the case of social networks. Rather, they’re simply gathering the information and arranging it in a useful form. This view, tirelessly promoted by Google — and used by the company as a defense in the Costeja case — has been embraced by much of the public. It has become the default view. When Wikipedia cofounder Jimmy Wales, in criticizing the European court’s decision, said, “Google just helps us to find the things that are online,” he was not only mouthing the company line; he was expressing the popular conception of information aggregators.

The court took a different view. Online aggregation is not a neutral act, it ruled, but a transformative one. In collecting, organizing, and ranking information, a search engine is creating something new: a distinctive and influential product that reflects the company’s own editorial intentions and judgments, as expressed through its information-processing algorithms. “The processing of personal data carried out in the context of the activity of a search engine can be distinguished from and is additional to that carried out by publishers of websites,” the justices wrote. “Inasmuch as the activity of a search engine is therefore liable to affect significantly […] the fundamental rights to privacy and to the protection of personal data, the operator of the search engine as the person determining the purposes and means of that activity must ensure, within the framework of its responsibilities, powers and capabilities, that the activity meets the requirements of [the Data Protection Directive] in order that the guarantees laid down by the directive may have full effect.”

The European court did not pass judgment on the guarantees established by the Data Protection Directive, nor on any other existing or prospective laws or policies pertaining to the processing of personal information. It did not tell society how to assess or regulate the activities of aggregators like Google or Facebook. It did not even offer an opinion as to the process companies or lawmakers should use in deciding which personal information warranted exclusion from search results — an undertaking every bit as thorny as it’s been made out to be. What the justices did, with perspicuity and prudence, was provide us with a way to think rationally about the algorithmic manipulation of digital information and the social responsibilities it entails. The interests of a powerful international company like Google, a company that provides an indispensable service to many people, do not automatically trump the interests of a lone individual. When it comes to the operation of search engines and other information aggregators, fairness is at least as important as expedience.

Ten months have passed since the court’s ruling, and we now know that the judgment is not going to “break the internet,” as was widely predicted when it was issued. The web still works. Google has a process in place for adjudicating requests for the removal of personal information — it accepts about forty percent of them — just as it has a process in place for adjudicating requests to remove copyrighted information. Last month, Google’s Advisory Council on the Right to Be Forgotten issued a report that put the ruling and the company’s response into context. “In fact,” the council wrote, “the Ruling does not establish a general Right to Be Forgotten. Implementation of the Ruling does not have the effect of ‘forgetting’ information about a data subject. Instead, it requires Google to remove links returned in search results based on an individual’s name when those results are ‘inadequate, irrelevant or no longer relevant, or excessive.’ Google is not required to remove those results if there is an overriding public interest in them ‘for particular reasons, such as the role played by the data subject in public life.’” It is possible, in other words, to strike a reasonable balance between an individual’s interests, the interests of the public in finding information quickly, and the commercial interests of internet companies.

* * *

We have had a hard time thinking clearly about companies like Google and Facebook because we have never before had to deal with companies like Google and Facebook. They are something new in the world, and they don’t fit neatly into our existing legal and cultural templates. Because they operate at such unimaginable magnitude, carrying out millions of informational transactions every second, we’ve tended to think of them as vast, faceless, dispassionate computers — as information-processing machines that exist outside the realm of human intention and control. That’s a misperception, and a dangerous one.

Modern computers and computer networks enable human judgment to be automated, to be exercised on a vast scale and at a breathtaking pace. But it’s still human judgment. Algorithms are constructed by people, and they reflect the interests, biases, and flaws of their makers. As Google’s founders themselves pointed out many years ago, an information aggregator operated for commercial gain will inevitably be compromised and should always be treated with suspicion. That is certainly true of a search engine that mediates our intellectual explorations; it is even more true of a social network that mediates our personal associations and conversations.

Because algorithms impose on us the interests and biases of others, we have a right and an obligation to carefully examine and, when appropriate, judiciously regulate those algorithms. We have a right and an obligation to understand how we, and our information, are being manipulated. To ignore that responsibility, or to shirk it because it raises hard problems, is to grant a small group of people — the kind of people who carried out the Facebook and OKCupid experiments — the power to play with us at their whim.

Image: Emily Hummel.

Varieties of friendship

I contributed to the latest New York Times “Room for Debate” discussion, which posed this question: “Can real relationships be forged between people who never meet? Do online-only friendships count?” Here’s my reply, slightly expanded from what appeared in the Times:

“No kinds of love,” sang Lou Reed in his Velvet Underground days, “are better than others.” There’s wisdom as well as kindness in that line. Only the mean of spirit would seek to redline certain varieties of love or friendship — to claim that some human relationships “don’t count.” I have happy memories of exchanging letters with distant pen pals while in elementary school, and I recall with fondness the conversations I had with like-minded cyberians in America Online chatrooms in the early nineties. Life is lonely; all connections have value.

That doesn’t mean that all connections are the same. If it’s odious to dismiss online friendships as invalid, it’s naive to pretend that there are no distinctions in quality between friendships forged in person and those conducted from afar. An epistolary friendship is different from a telephonic friendship, and an email friendship is different from a Facebook friendship. And all of those mediated, or disembodied, friendships are different from embodied friendships, the ones established between persons who are in close enough proximity to actually touch each other.

The differences between virtual and embodied friendships come clearly into view at moments of transition, when an embodied friendship becomes a virtual one or vice versa. People who have built a friendship in person have little trouble continuing the friendship online when they’re separated. The friendship may eventually peter out — absence doesn’t always make the heart grow fonder — but the friends don’t feel any anxiety about exchanging messages through their phones or laptops.

Now think about what happens when people who have struck up friendships online finally get together in the physical world. The meetings are usually approached with nervousness and trepidation. Will we hit it off? Will we still like each other when we’re sitting at a table together? Who is this person, anyway?

The anxiety that virtual friends feel when they’re about to meet in person is telling. It reveals the fragility, the sparseness, of disembodied relationships. It makes plain that we don’t feel we really know another person until we’ve met him or her in the flesh. Screen presence leaves a lot of room for fantasizing, for projecting the self into the other; physical presence is more solid, more filled in — and, yes, more real. “Some kinds of love are mistaken for vision,” Reed sang in that same song. And some visions are mistaken for love.

Photo by John.

Guided by satellites

I will be in Cambridge, Mass., this afternoon to give a talk entitled “The World Is Not the Screen: How Computers Shape Our Sense of Place.” It is part of the ongoing Navigation Lecture Series presented by Harvard’s Radcliffe Institute for Advanced Study. The talk starts at five and is free and open to the public. So if you’re in the neighborhood, please come by. Details are here.

Just press send

We’ve been getting a little lesson in what human-factors boffins call “automation complacency” over the last couple of days. Google apparently made some change to the autosuggest algorithm in Gmail over the weekend, and the program started inserting unusual email addresses into the “To” field of messages. As Business Insider explained, “Instead of auto-completing to the most-used contact when people start typing a name into the ‘To’ field, it seems to be prioritizing contacts that they communicate with less frequently.”
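
Ranking candidate addresses by how often you write to them is the standard-sounding approach, and the reported misbehavior looks like that ranking being inverted. A rough sketch of the intended logic, with invented data and addresses; this is a guess at the general idea, not Gmail's actual code:

```python
# Toy contact autocomplete: given a typed prefix, suggest matching addresses
# ranked by how often the user has emailed them. Sorting ascending instead of
# descending would reproduce the kind of misbehavior described above.

contact_counts = {                     # invented data: address -> messages sent
    "alex@example.com": 312,
    "alan@example.org": 4,
    "alicia@example.net": 87,
}

def suggest(prefix, counts, limit=3):
    matches = [a for a in counts if a.startswith(prefix.lower())]
    return sorted(matches, key=lambda a: counts[a], reverse=True)[:limit]

print(suggest("al", contact_counts))
# -> ['alex@example.com', 'alicia@example.net', 'alan@example.org']
```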

Google quickly acknowledged the problem.

The glitch led to a flood of misdirected messages, as people pressed Send without bothering to check the computer’s work. “I got a bunch of emails yesterday that were clearly not meant for me,” blogged venture capitalist Fred Wilson on Monday. Gmail users flocked to Twitter to confess to shooting messages to the wrong people. “My mum just got my VP biz dev’s expense report,” tweeted Pingup CEO Mark Slater. “She was not happy.” Wrote CloudFlare founder Matthew Prince, “It’s become pathological.”

The bug may lie in the machine, but the pathology actually lies in the user. Automation complacency happens all the time when computers take over tasks from people. System operators place so much trust in the software that they start to zone out. They assume that the computer will perform flawlessly in all circumstances. When the computer fails or makes a mistake, the error goes unnoticed and uncorrected — until too late.

Researchers Raja Parasuraman and Dietrich Manzey described the phenomenon in a 2010 article in Human Factors:

Automation complacency — operationally defined as poorer detection of system malfunctions under automation compared with under manual control — is typically found under conditions of multiple-task load, when manual tasks compete with the automated task for the operator’s attention. … Experience and practice do not appear to mitigate automation complacency: Skilled pilots and controllers exhibit the effect, and additional task practice in naive operators does not eliminate complacency. It is possible that specific experience in automation failures may reduce the extent of the effect. Automation complacency can be understood in terms of an attention allocation strategy whereby the operator’s manual tasks are attended to at the expense of the automated task, a strategy that may be driven by initial high trust in the automation.

In the worst cases, automation complacency can result in planes crashing on runways, school buses smashing into overpasses, or cruise ships running aground on sandbars. Sending an email to your mom instead of a colleague seems pretty trivial by comparison. But it’s a symptom of the same ailment, an ailment that we’ll be seeing a lot more of as we rush to hand ever more jobs and chores over to computers.

Brains, real and metaphorical

A few highlights from Lee Gomes’s long, lucid interview with Facebook’s artificial-intelligence chief Yann LeCun in IEEE Spectrum:

Gomes: We read about Deep Learning in the news a lot these days. What’s your least favorite definition of the term that you see in these stories?

LeCun: My least favorite description is, “It works just like the brain.” I don’t like people saying this because, while Deep Learning gets an inspiration from biology, it’s very, very far from what the brain actually does. And describing it like the brain gives a bit of the aura of magic to it, which is dangerous. It leads to hype; people claim things that are not true. AI has gone through a number of AI winters because people claimed things they couldn’t deliver.

Gomes: You seem to take pains to distance your work from neuroscience and biology. For example, you talk about “convolutional nets,” and not “convolutional neural nets.” And you talk about “units” in your algorithms, and not “neurons.”

LeCun: That’s true. Some aspects of our models are inspired by neuroscience, but many components are not at all inspired by neuroscience, and instead come from theory, intuition, or empirical exploration. Our models do not aspire to be models of the brain, and we don’t make claims of neural relevance.

Gomes: You’ve already expressed your disagreement with many of the ideas associated with the Singularity movement. I’m interested in your thoughts about its sociology. How do you account for its popularity in Silicon Valley?

LeCun: It’s difficult to say. I’m kind of puzzled by that phenomenon. As Neil Gershenfeld has noted, the first part of a sigmoid looks a lot like an exponential. It’s another way of saying that what currently looks like exponential progress is very likely to hit some limit—physical, economical, societal—then go through an inflection point, and then saturate. I’m an optimist, but I’m also a realist.

There are people that you’d expect to hype the Singularity, like Ray Kurzweil. He’s a futurist. He likes to have this positivist view of the future. He sells a lot of books this way. But he has not contributed anything to the science of AI, as far as I can tell. He’s sold products based on technology, some of which were somewhat innovative, but nothing conceptually new. And certainly he has never written papers that taught the world anything on how to make progress in AI.

Gomes: You yourself have a very clear notion of where computers are going to go, and I don’t think you believe we will be downloading our consciousness into them in 30 years.

LeCun: Not anytime soon.

Peak code?

“Will human replacement — the production by ourselves of ever better substitutes for ourselves — deliver an economic utopia with smart machines satisfying our every material need? Or will our self-induced redundancy leave us earning too little to purchase the products our smart machines can make?” So ask three Boston University economists, Seth Benzell, Laurence Kotlikoff, and Guillermo LaGarda, and Columbia’s Jeffrey Sachs. In an attempt to answer the question, the researchers turned to — what else? — a computer. They programmed a “bare-bones” model of the economy, featuring high-tech workers (who produce software) and low-tech workers (who produce services), and ran the simulation under different parameter settings.

The results were, as the economists put it in a new paper on the experiment, “disturbing.” The simulation suggests that “technological progress can be immiserating” and that even talented software programmers may face tough times in an ever more automated economy. The reason lies in the durability and reusability of software. Code is not used up; it accumulates. As the cost of deploying software for productive work (i.e., the cost of automation) goes down, demand for new code spikes, bringing lots of new programmers into the labor market. The generous compensation provided to the programmers leads at first to higher savings and capital formation, fueling the boom. But “over time,” the model reveals, “as the stock of legacy code grows, the demand for new code, and thus for high-tech workers, falls.”

As a simple illustration, the authors point to the development of a robotic chess player. Once you have a robot that can outperform all human players, the incentive for programming new robotic players drops sharply. This is something we’ve already seen, as the authors point out: “Take Junior – the reigning World Computer Chess Champion. Junior can beat every current and, possibly, every future human on the planet. Consequently, his old code has largely put new chess programmers out of business.” Once any program reaches a superhuman level of productivity in a task, the incentive to invest in further, marginal gains falls.

The authors lay out the resulting economic dynamic:

The increase in [the code retention rate] initially raises the compensation of code-writing high-tech workers. This draws more high-tech workers into code-writing, thereby raising high-tech worker compensation … Things change over time. As more durable code comes on line, the marginal productivity of code falls, making new code writers increasingly redundant. Eventually the demand for code-writing high-tech workers is limited to those needed to cover the depreciation of legacy code, i.e., to retain, maintain, and update legacy code. The remaining high-tech workers find themselves working in the service sector [and pushing down wages in those occupations]. The upshot is that high-tech workers can end up potentially earning far less than in the [model’s] initial steady state.

As usable code stocks swell, the model indicates, we will at some point pass the cycle’s point of peak code — the moment of maximum demand for new code — and the prospects for employment in programming will begin to decline. Code boom will turn to code bust. (The bust will be even deeper, the economists found, if software is distributed as open source and hence made easier to share.) Even though high-tech workers “start out earning far more than low-tech workers,” they “end up earning far less.”
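
The authors’ general-equilibrium model is far richer than anything that fits in a blog post, but the peak-code mechanism can be caricatured in a few lines: if demand for code services follows an S-curve and most old code survives from one period to the next, the amount of new code that has to be written rises, peaks, and then falls back to whatever is needed to replace depreciating legacy code. The parameters below are invented for illustration and have nothing to do with the paper’s calibration.

```python
# A toy caricature of the "peak code" dynamic: demand for code services follows
# an S-curve, most old code survives from period to period, and only the
# shortfall has to be freshly written. All numbers are invented.

import math

RETENTION = 0.9    # share of the code stock that survives each period
CAP = 1000.0       # long-run demand for code services

def demand(t):
    """S-shaped (logistic) demand path for code services."""
    return CAP / (1 + math.exp(-(t - 10) / 2))

stock = demand(0)  # start with a small legacy stock
for t in range(1, 31):
    surviving = RETENTION * stock
    new_code = max(0.0, demand(t) - surviving)  # only the shortfall is written
    stock = surviving + new_code
    if t in (1, 5, 10, 12, 20, 30):
        print(f"period {t:2d}: new code written = {new_code:6.1f}")
```

Run with these made-up numbers, the amount of new code written climbs for roughly a dozen periods, tops out, and then drifts down toward the steady replacement level — the qualitative boom-and-bust pattern the paper describes.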

One thing the economists don’t seem to account for is the automation of programming itself, particularly the use of software to perform many of the tasks necessary to maintain, update, and redeploy legacy code. The automation of coding, which would be encouraged as programmers’ wages increase during the boom period, would likely deepen the bust even further.

Computer models of complex systems are always simplifications, of course, but this study serves to raise important and complicated questions about the long-run demand for programmers. It’s become popular to suggest that all kids should be taught to code as part of their education. That way, the theory goes, they’ll be assured of good jobs in an ever more computerized economy. This study calls into question that hopeful assumption. There can be a glut of coders just as there can be a glut of code.

Image of hackathon: Wikipedia.

@Gilligan #Franzen #Facebook #TV

From Susan Lerner’s interview with Jonathan Franzen in Booth:

SL: I want to ask you about technology and social media. … I was wondering, given your change of heart about television and its place within our culture, can you comment on this conversion and the possibility that social media might also one day redeem itself?

JF: TV redeemed itself by becoming more like the novel, which is to say: interested in sustained, morally complex narrative that is compelling and enjoyable. How that happens with pictures of you and your friends at T. G. I. Friday’s isn’t clear to me. Twitter isn’t even trying to be a narrative form. Its structure is antithetical to sustained and carefully considered story-telling. How does a structure like that suddenly turn itself into narrative art? You could say, well, Gilligan’s Island wasn’t art, either. But Gilligan’s Island paved the way, by being twenty-two minutes of a narrative, however dumb, to the twenty-two minutes of Nurse Jackie. 

SL: You see a trajectory?

JF: Yes, you can see the trajectory there. Which is the same trajectory that the novel itself followed. There was a lot of really bad experimentation in the seventeenth century as we were trying to work out these fundamental problems of “Is this narrative pretending to be true? Is it acknowledging that it’s not true? Are novels only about fantastical things? Where does everyday life fit in?” There were a couple of centuries of sorting that out before the novel really got going in Richardson and Fielding, and then, soon after, culminating in Austen. You can see that maturation in movies as well. You had Birth of a Nation before you had The Rules of the Game. It takes a while for artistic media to mature—I take that point—but I don’t know anyone who thinks that social media is an artistic medium. It’s more like another phone, home movies, email, whatever. It’s like a better version of the way people socially interacted in the past, a more technologically advanced version. But if you use your Facebook page to publish chapters of a novel, what you get is a novel, not Facebook. It’s a struggle to imagine what value is added by the technology itself.

SL: I think there’s an argument that can be made about new technology providing different forms and twists on established ideas, so people can examine—

JF: I’m just looking at the phenomenology of this technology in everyday life.

SL: Pictures of desserts.

JF: Yeah, pictures of desserts and the fact that you can’t sit still for five minutes without sending and receiving texts. I mean, it does not look like any form of engagement with art that I recognize from any field. It looks like a distraction and an addiction and a tool. A useful tool. I’m not a technophobe. I’m on the internet all day, every day, except when I’m actually trying to write, and even then I’m on a computer and using, often, material that I’ve taken from the internet. It’s not that I have technophobia. It’s the notion that somehow this is a transformative, liberating thing that I take issue with, when it seems to me more like a perfection of the free market’s infiltration of every aspect of a human being’s waking life.

It’s interesting — this is an aside — how deeply Gilligan’s Island managed to engrave itself into the cultural worldview of a certain generation of Americans. Despite its surface dumbness, the show, I would suggest, carries a mythical weight, what with the totemic quality of the characters — scientist, celebrity, tycoon, seafarer, etc. — and the Promethean nature of the plot.

O, unscepter’d isle, demi-paradise, demi-hell!
