Peak code?


“Will human replacement — the production by ourselves of ever better substitutes for ourselves — deliver an economic utopia with smart machines satisfying our every material need? Or will our self-induced redundancy leave us earning too little to purchase the products our smart machines can make?” So ask three Boston University economists, Seth Benzell, Laurence Kotlikoff, and Guillermo LaGarda, and Columbia’s Jeffrey Sachs. In an attempt to answer the question, the researchers turned to — what else? — a computer. They programmed a “bare-bones” model of the economy, featuring high-tech workers (who produce software) and low-tech workers (who produce services), and ran the simulation under different parameter settings.

The results were, as the economists put it in a new paper on the experiment, “disturbing.” The simulation suggests that “technological progress can be immiserating” and that even talented software programmers may face tough times in an ever more automated economy. The reason lies in the durability and reusability of software. Code is not used up; it accumulates. As the cost of deploying software for productive work (i.e., the cost of automation) goes down, demand for new code spikes, bringing lots of new programmers into the labor market. The generous compensation provided to the programmers leads at first to higher savings and capital formation, fueling the boom. But “over time,” the model reveals, “as the stock of legacy code grows, the demand for new code, and thus for high-tech workers, falls.”

As a simple illustration, the authors point to the development of a robotic chess player. Once you have a robot that can outperform all human players, the incentive for programming new robotic players drops sharply. This is something we’ve already seen, as the authors point out: “Take Junior – the reigning World Computer Chess Champion. Junior can beat every current and, possibly, every future human on the planet. Consequently, his old code has largely put new chess programmers out of business.” Once any program reaches a superhuman level of productivity in a task, the incentive to invest in further, marginal gains falls.

The authors lay out the resulting economic dynamic:

The increase in [the code retention rate] initially raises the compensation of code-writing high-tech workers. This draws more high-tech workers into code-writing, thereby raising high-tech worker compensation … Things change over time. As more durable code comes on line, the marginal productivity of code falls, making new code writers increasingly redundant. Eventually the demand for code-writing high-tech workers is limited to those needed to cover the depreciation of legacy code, i.e., to retain, maintain, and update legacy code. The remaining high-tech workers find themselves working in the service sector [and pushing down wages in those occupations]. The upshot is that high-tech workers can end up potentially earning far less than in the [model’s] initial steady state.

As usable code stocks swell, the model indicates, we will at some point pass the cycle’s point of peak code — the moment of maximum demand for new code — and the prospects for employment in programming will begin to decline. Code boom will turn to code bust. (The bust will be even deeper, the economists found, if software is distributed as open source and hence made easier to share.) Even though high-tech workers “start out earning far more than low-tech workers,” they “end up earning far less.”
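
The dynamic is easy to see in miniature. Here is a toy simulation (a sketch of my own, not the economists’ actual model) in which demand for code services ramps up as automation gets cheaper, new code fills whatever gap the surviving legacy stock leaves unmet, and durable code carries over from period to period. Every parameter value is an illustrative assumption:

```python
import math

# Toy sketch of the "peak code" dynamic. NOT the authors' model;
# all parameters below are assumptions chosen for illustration.

RETENTION = 0.95    # share of legacy code still productive each period
ADJUSTMENT = 0.5    # fraction of unmet demand filled with new code
CEILING = 100.0     # long-run demand for code services

stock = 0.0
peak_new, peak_period = 0.0, 0
for t in range(1, 41):
    demand = CEILING / (1 + math.exp(-(t - 10) / 3))  # adoption ramps up as costs fall
    legacy = RETENTION * stock                        # durable code carried over
    new_code = max(0.0, ADJUSTMENT * (demand - legacy))
    stock = legacy + new_code                         # code accumulates; it is not used up
    if new_code > peak_new:
        peak_new, peak_period = new_code, t
    print(f"t={t:2d}  demand={demand:6.1f}  new code={new_code:5.1f}  legacy stock={stock:6.1f}")

print(f"peak code reached in period {peak_period} (new code = {peak_new:.1f})")
```

Run it and the flow of new code climbs through the boom, peaks partway up the adoption ramp, then sinks toward the small trickle needed to offset depreciation of the legacy stock.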

One thing the economists don’t seem to account for is the automation of programming itself, particularly the use of software to perform many of the tasks necessary to maintain, update, and redeploy legacy code. The automation of coding, which would be encouraged as programmers’ wages increase during the boom period, would likely deepen the bust even further.

Computer models of complex systems are always simplifications, of course, but this study serves to raise important and complicated questions about the long-run demand for programmers. It’s become popular to suggest that all kids should be taught to code as part of their education. That way, the theory goes, they’ll be assured of good jobs in an ever more computerized economy. This study calls into question that hopeful assumption. There can be a glut of coders just as there can be a glut of code.

Image of hackathon: Wikipedia.

@Gilligan #Franzen #Facebook #TV


From Susan Lerner’s interview with Jonathan Franzen in Booth:

SL: I want to ask you about technology and social media. … I was wondering, given your change of heart about television and its place within our culture, can you comment on this conversion and the possibility that social media might also one day redeem itself?

JF: TV redeemed itself by becoming more like the novel, which is to say: interested in sustained, morally complex narrative that is compelling and enjoyable. How that happens with pictures of you and your friends at T. G. I. Friday’s isn’t clear to me. Twitter isn’t even trying to be a narrative form. Its structure is antithetical to sustained and carefully considered story-telling. How does a structure like that suddenly turn itself into narrative art? You could say, well, Gilligan’s Island wasn’t art, either. But Gilligan’s Island paved the way, by being twenty-two minutes of a narrative, however dumb, to the twenty-two minutes of Nurse Jackie. 

SL: You see a trajectory?

JF: Yes, you can see the trajectory there. Which is the same trajectory that the novel itself followed. There was a lot of really bad experimentation in the seventeenth century as we were trying to work out these fundamental problems of “Is this narrative pretending to be true? Is it acknowledging that it’s not true? Are novels only about fantastical things? Where does everyday life fit in?” There were a couple of centuries of sorting that out before the novel really got going in Richardson and Fielding, and then, soon after, culminating in Austen. You can see that maturation in movies as well. You had Birth of a Nation before you had The Rules of the Game. It takes a while for artistic media to mature—I take that point—but I don’t know anyone who thinks that social media is an artistic medium. It’s more like another phone, home movies, email, whatever. It’s like a better version of the way people socially interacted in the past, a more technologically advanced version. But if you use your Facebook page to publish chapters of a novel, what you get is a novel, not Facebook. It’s a struggle to imagine what value is added by the technology itself.

SL: I think there’s an argument that can be made about new technology providing different forms and twists on established ideas, so people can examine—

JF: I’m just looking at the phenomenology of this technology in everyday life.

SL: Pictures of desserts.

JF: Yeah, pictures of desserts and the fact that you can’t sit still for five minutes without sending and receiving texts. I mean, it does not look like any form of engagement with art that I recognize from any field. It looks like a distraction and an addiction and a tool. A useful tool. I’m not a technophobe. I’m on the internet all day, every day, except when I’m actually trying to write, and even then I’m on a computer and using, often, material that I’ve taken from the internet. It’s not that I have technophobia. It’s the notion that somehow this is a transformative, liberating thing that I take issue with, when it seems to me more like a perfection of the free market’s infiltration of every aspect of a human being’s waking life.

It’s interesting — this is an aside — how deeply Gilligan’s Island managed to engrave itself into the cultural worldview of a certain generation of Americans. Despite its surface dumbness, the show, I would suggest, carries a mythical weight, what with the totemic quality of the characters — scientist, celebrity, tycoon, seafarer, etc. — and the Promethean nature of the plot.

O, unscepter’d isle, demi-paradise, demi-hell!

Where will driverless cars drive us?


I have an article in Fortune that looks at the hype surrounding autonomous cars. Here’s a bit from the piece highlighting recent research that calls into question some of the common assumptions about robotic vehicles:

At a car conference last September, Steven Shladover, a research engineer at the University of California at Berkeley, explained that automotive automation presents far more daunting challenges than aircraft automation. Cars travel much closer together than planes do, they have less room to maneuver in emergencies, and drivers have to deal with a welter of earthly obstacles, from jaywalkers to work crews to potholes. Developing a driverless car, Shladover said, will be orders of magnitude harder than developing a pilotless airliner. It’s going to be a long time, he cautioned, before we’ll be able to curl up in the back seat while a robot drives us to work.

Even if perfect automation remains beyond our reach, progress in automotive robotics will sprint forward. Top-end luxury cars are already highly automated, able to center themselves in a lane and adjust their speed to fit traffic conditions, and computers are set to take over many more driving tasks in the years ahead. As always, though, the road to the future will have many twists and forks. The choices that companies and designers make in automating cars will influence not only how we drive but how we live. As we learned in the last century, advances in personal-transportation technologies can have profound consequences for everything from housing to urban planning to energy policy.

Consider safety. It’s often assumed that automation will reduce traffic accidents, if not eliminate them entirely. But that’s not necessarily the case. Research into human-computer interaction reveals that partial automation can actually make complex tasks like driving more dangerous. People relying on automation quickly become complacent, trusting the computer to perform flawlessly, and that raises the odds that they’ll make mistakes when they have to reengage with the work, particularly in an emergency. A study of drivers by U.K. scholars Neville Stanton and Mark Young found that while shifting routine driving chores to computers can reduce workload and stress, it also “lulls drivers into a false sense of security.” They lose “situational awareness,” which can have tragic consequences when split-second reactions are required to avoid an accident.

The risks will likely be magnified during the long transitional period when automobiles with varying degrees of automation share the road. Given that the average American passenger vehicle is more than 11 years old, there will be “at least a several-decade-long period during which conventional and self-driving vehicles would need to interact,” report Michael Sivak and Brandon Schoettle of the University of Michigan’s Transportation Research Institute. That becomes particularly problematic when you take into account driving’s complex social psychology. When we change lanes, enter traffic, or execute other tricky maneuvers, we tend to make quick, intuitive decisions based on our experience of how other drivers act. But all those deeply learned assumptions may no longer apply when the other driver is a robot. Just the loss of eye contact between human drivers, Sivak and Schoettle warn, could introduce new and unexpected risks, particularly for drivers of older, less automated cars.

Beyond the knotty technical questions are equally complicated social ones. Peter Norton, a transportation expert at the University of Virginia, points out that the way autonomous vehicles are designed will have a profound influence on people’s driving habits. If automation makes driving and parking easier, and in particular if it allows commuters to do other things while in their cars, it could end up encouraging people to drive more often or to commute over longer distances. Cities and suburbs would become even more congested, highway infrastructure would come under more stress, and investments in public transport might wither further. “If we rebuild the landscape for autonomous vehicles,” Norton writes, “we may make it unsuitable for anything else — including walking.”

On the other hand, if we design autonomous vehicles as part of a thoughtful overhaul of the nation’s transit systems, the new cars could play a part in reducing traffic, curtailing air pollution, and engendering more livable cities. It’s a mistake, Norton argues, to view autonomous cars in isolation, and it’s an even bigger mistake to assume that automotive automation will be a panacea for complex problems like traffic and safety. “Before we make autonomous cars the solution,” he says, “we must formulate the problem correctly.”

Image: US Department of Transportation.

The medium is the morality


In 1870, W. A. Rogers, a British bureaucrat in the Bombay Civil Service, wrote of the fortifying effect that modern transport systems were having on the character of the local populace:

Railways are opening the eyes of the people who are within reach of them in a variety of ways. They teach them that time is worth money, and induce them to economise that which they had been in the habit of slighting and wasting; they teach them that speed attained is time, and therefore money, saved or made. . . . Above all, they induce in them habits of self-dependence, causing them to act for themselves promptly and not lean on others.

The locomotive was a moral engine as well as a mechanical one. It carried people horizontally, across the land, but also vertically, up the ladder of enlightenment. As Russell Hittinger notes:

What is most striking about [Rogers’s] statement is that the machine is regarded as the proximate cause of the liberal virtues; habits of self-dependence are the effect of the application of a technology. The benighted peoples of the sub-continent are to be civilized, not by reading Cicero, not by conversion to the Church of England, not even by adopting the liberal faith, but by receiving the discipline of trains and clocks. The machine is both the exemplar and the proximate cause of individual and cultural perfection.

Tools are, whether by design or by accident, imbued with a certain moral character — they instruct us in how to act — and that in-built, artificial morality offers a readymade substitute for our own. The technology becomes an ethical guide. We embrace its morality as our own. An earlier and more dramatic example of such ethical transference came, as Hittinger suggests, in the form of the mechanical clock. Before the arrival of the time-keeping machine, life was largely “free of haste, careless of exactitude, unconcerned by productivity,” the historian Jacques Le Goff has written. With every tick, the new clock in the town square issued an indictment of such idleness and imprecision. It taught people that time was as measurable as money, something precious that could be wasted or lost. The clock became, to quote David Landes, “prod and key to personal achievement and productivity.”

Just as the clock and the railroad gave our forebears lessons in, and indeed models for, industriousness, thrift, and punctuality, so the computer today offers us its own character instruction. Its technical features are taken for ethical traits. Consider how the protocols of networking, the arcane codes that allow computers to exchange data and share resources, have become imbued with moral weight. The computer fulfills its potential, becomes a whole being, so to speak, only when it is connected to and actively communicating with other computers. An isolated computer is as bad as an idle computer. And the same goes for people. The sense of the computer network as a model for a moral society runs, with different emphases, through the work of such prominent and diverse thinkers as Yochai Benkler, David Weinberger, Clay Shirky, Steven Johnson, and Kevin Kelly. We, too, become whole beings only when we are connected. And if being connected is the ideal, then being disconnected becomes morally suspect. The loner, the recluse, the outsider, the solitary thinker, the romantic quester: all such individuals carry an ethical stain today that goes beyond mere unsociability — they are letting the rest of us down by not sharing, by not connecting. To be inaccessible to the network is to waste one’s social capital, a deadly sin.

But the computer goes even further than mechanical tools and systems in shaping our conception of virtue. It provides more than just a model. It offers us a means for “outsourcing” our ethical sense, as Evan Selinger and Thomas Seager put it. With the personal computer, we have an intimate machine, a technological companion and guru, that can automate the making of moral choices, that through its programming can prod us, nudge us, and otherwise lead us down the righteous path. Arianna Huffington celebrates the potential of the smartphone to provide a “GPS for the soul,” offering ethical “course corrections” as we go through the day.

In discussing the automation of moral choice, Hittinger draws a connection with the work of the historian Christopher Dawson, who in a 1960 lecture argued that modern technology, and the social order it both represents and underpins, has become “the real basis of secular culture”:

Modern technologies are not only “labor saving” devices. A labor saving device, like an automated farm implement or a piston, replaces repetitive human acts. But most distinctive of contemporary technology is the replacement of the human act; or, of what the scholastic philosophers called the actus humanus. The machine reorganizes and to some extent supplants the world of human action, in the moral sense of the term. … It is important to understand that Dawson’s criticism of technology is not aimed at the tool per se. His criticism has nothing to do with the older, and in our context, misleading notion of “labor saving” devices. Rather, it is aimed at a new cultural pattern in which tools are either deliberately designed to replace the human act, or at least have the unintended effect of making the human act unnecessary or subordinate to the machine.

Philosophy professor Joshua Hochschild goes further: “Automation makes us forget that we are moral agents.” When software code becomes moral code, moral code becomes meaningless.

An ear for an ear


In Vocal Apparitions, Michal Grover-Friedlander describes the origins of our modern communication network:

In 1874 Alexander Bell invented the first model of a phone receiver using an ear membrane taken from a human corpse’s ear. The first telephone receiver was, in fact, a human ear, a machine that transmitted a living human voice by way of a dead human’s ear.

“The words of a dead man,” wrote W. H. Auden, “Are modified in the guts of the living.” The reverse, it would seem, is also true.

Image: Still from “Blue Velvet.”

Maps, mind and memory


In concert with the UK publication of The Glass Cage, Penguin Books’ Think Smarter site is running an article by me on satellite navigation. Titled “Welcome to Nowheresville,” it’s adapted from a chapter in the book called “World and Screen.” Here’s a taste of the piece:

A GPS device, by allowing us to get from point A to point B with the least possible effort and nuisance, can make our lives easier. But what it steals from us, when we turn to it too often, is the joy and satisfaction of apprehending the world around us — and of making that world a part of us. In his book Being Alive, Tim Ingold, an anthropologist at the University of Aberdeen, draws a distinction between two very different modes of travel: wayfaring and transport. Wayfaring, he explains, is “our most fundamental way of being in the world.” Immersed in the landscape, attuned to its textures and features, the wayfarer enjoys “an experience of movement in which action and perception are intimately coupled.” Wayfaring becomes “an ongoing process of growth and development, or self-renewal.” Transport, on the other hand, is “essentially destination-oriented.” It’s not so much a process of discovery “along a way of life” as a mere “carrying across, from location to location, of people and goods in such a way as to leave their basic natures unaffected.” In transport, the traveller doesn’t actually move in any meaningful way. “Rather, he is moved, becoming a passenger in his own body.”

Wayfaring is messier and less efficient than transport, which is why it has become a target for automation. “If you have a mobile phone with Google Maps,” says Michael Jones, an executive in Google’s mapping division, “you can go anywhere on the planet and have confidence that we can give you directions to get to where you want to go safely and easily.” As a result, he declares, “No human ever has to feel lost again.” That certainly sounds appealing, as if some basic problem in our existence had been solved forever. And it fits the Silicon Valley obsession with using software to rid people’s lives of “friction.” But the more you think about it, the more you realise that to never confront the possibility of getting lost is to live in a state of perpetual dislocation. If you never have to worry about not knowing where you are, then you never have to know where you are.

Read on.

Jonathan Swift’s smartphone

Evolution has engineered us for social interaction. Our bodies are instruments exquisitely tuned for tracking and measuring the auras of others. In quantifying ourselves, therefore, we also quantify those around us. This is the insight that underpins the brilliant new iPhone app pplkpr.

Connected to a sensor-equipped smart wristband, pplkpr takes biometric readings of how interactions with your Facebook friends, in person or screen-mediated, affect your physical and emotional state. pplkpr tells you, in hard, objective numbers, whether a friend makes you happy or sad, anxious or calm, aroused or enervated. It’s a flux capacitor for the soul.


What’s really cool about the app is how it makes the biometric data socially actionable. pplkpr doesn’t just give you “a breakdown of who’s affecting you most,” its developers say; it also “acts for you — inviting people to hang out, sending messages, or blocking or unfriending negative friends.” Bottom line: “It will automatically manage your relationships, so you don’t have to.” The next step, clearly, will be to aggregate the data, so you’ll be able to tell at a glance whether a would-be friend will add something meaningful to your life or just bum you out.
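
The decision logic the developers describe is simple enough to sketch. Here is a hypothetical rendering (invented for illustration; the names, readings, and threshold are made up, and this is not pplkpr’s actual code). It averages the stress readings logged for each person and flags anyone above a cutoff as a “negative friend”:

```python
from statistics import mean

# Hypothetical sketch of pplkpr-style relationship triage (not the
# app's actual code). Each reading pairs a person with a stress score
# (0 = calm, 1 = maximally stressed) inferred from biometric data.

readings = [
    ("alice", 0.2), ("bob", 0.9), ("alice", 0.3),
    ("bob", 0.8), ("carol", 0.5),
]

UNFRIEND_THRESHOLD = 0.7  # assumed cutoff for "negative" friends

by_person = {}
for name, stress in readings:
    by_person.setdefault(name, []).append(stress)

for name, scores in sorted(by_person.items()):
    avg = mean(scores)
    verdict = "unfriend" if avg > UNFRIEND_THRESHOLD else "keep"
    print(f"{name}: average stress {avg:.2f} -> {verdict}")
```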

From its vowel-challenged name to its clinically infantile interface, pplkpr is of course a work of satire. It was developed by a pair of artists, with backing not from Kickstarter but from the Andy Warhol Foundation for the Visual Arts. The wonderful thing about the app is that it’s being taken seriously. The early reviews at the App Store are encouraging.


Among tech sites, the buzz is building. TechCrunch gives the app a straight-faced review, seeing a lot of upside:

Don’t know how you feel about someone in your life? By pairing a heart rate monitor with the pplkpr iOS app, you could soon find out. The app pairs up with any Bluetooth-enabled heart rate monitor to track your physical response around certain people in your life. Biofeedback from those devices log reactions such as joy, anger, sadness, and then uploads what it determines to be those emotional reactions to the app. …

The overall promise is to help you spend more time with those who contribute to your well-being and avoid those who stress you out. It does this in a way that aims to excuse you from having to make that sometimes difficult decision yourself. pplkpr doesn’t tell you if someone you meet has been blocked by others or if you are actually the one stressing everyone else out, but it does provide a nice excuse to get away from someone.

And check out this glowing report from Fox News.

Even journalists who know it’s a joke can’t help but see genuine potential in its workings. Wired’s Liz Stinson didn’t even crack a smile in covering the app today:

pplkpr lets you quantify the value of your relationships based on a few data streams. A heart rate wrist band measures the subtle changes in your heart rate, alerting you to spikes in stress or excitement. This biometric data is correlated with information you manually input about the people you’re hanging out with. Based on patterns, algorithms will determine whether you should be spending more time with a certain person or if you should cut him out altogether. …

Framed as art, pplkpr is granted the buffer of being a provocation or even satire, but it’s not outlandish to consider a reality where people will earnestly look to algorithms to make sense of how they feel. Implemented responsibly, that could be a positive thing — an objective set of eyes can help us see that a relationship is unhealthy.

I wouldn’t be surprised at this point to see Mark Zuckerberg buy pplkpr — for, say, $1.3 billion. It would hardly be the first time that satire proved prophetic.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here. A full listing of posts can be found here.