Is software de-skilling programmers?

[Photo: eclipse]

One of the themes of “The Great Forgetting,” my essay in the new Atlantic, is the spread of de-skilling into the professional workforce. Through the nineteenth and twentieth centuries, the mechanization of industry led to the de-skilling of many manual trades, turning craftsmen into machine operators. As software automates intellectual labor, there are signs that a similar trend is now reaching white-collar workers, from accountants to lawyers.

Software writers themselves don’t seem immune from the new de-skilling wave. The longtime Google programmer Vivek Haldar, responding to my essay on his personal blog, writes of the danger of de-skilling inherent in modern integrated development environments (IDEs) like Eclipse and Visual Studio. IDEs automate many routine coding tasks, and as they’ve grown more sophisticated they’ve taken on higher-level tasks like restructuring, or “refactoring,” code:

Modern IDEs are getting “helpful” enough that at times I feel like an IDE operator rather than a programmer. They have support for advanced refactoring. Linters can now tell you about design issues and code smells. The behavior all these tools encourage is not “think deeply about your code and write it carefully”, but “just write a crappy first draft of your code, and then the tools will tell you not just what’s wrong with it, but also how to make it better.”
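
Haldar’s point is easy to see in miniature. Below is a minimal sketch, in Java, of the kind of one-keystroke “extract method” refactoring that Eclipse and Visual Studio will perform automatically; the invoice class and its fields are hypothetical, invented purely for illustration.

```java
// Before: the tax expression is written out twice, the sort of
// duplication an IDE inspection or linter will flag as a "smell."
class Invoice {
    double subtotal;
    double taxRate;

    double total() {
        return subtotal + subtotal * taxRate;
    }

    double taxOwed() {
        return subtotal * taxRate;
    }
}

// After: the IDE's automated "extract method" refactoring routes the
// duplicated expression through a single named method. The programmer
// supplies a keystroke; the tool supplies the structural judgment.
class InvoiceRefactored {
    double subtotal;
    double taxRate;

    double total() {
        return subtotal + taxOwed();
    }

    double taxOwed() {
        return subtotal * taxRate;
    }
}
```

The result is a cleaner codebase, arrived at with hardly any deliberation on the programmer’s part, which is precisely what gives Haldar pause.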

Haldar is not dismissing the benefits of IDEs, which, he argues, can lead to “a cleaner codebase” as well as greater productivity. His comments point to the essential tension that has always characterized technological de-skilling: the very real benefits of labor-saving technology come at the cost of a loss of human talent. The hard challenge is knowing where to draw the line—or just realizing that there is a line to be drawn.

Photo by Nathan Bergey.


Tote that barge, tweet that tweet

[Photo: shepherd]

John Maynard Keynes believed that labor-saving technology would eventually create a utopia of leisure. (The date he had in mind was 2030.) Relieved of our narrow, demeaning jobs, we’d enjoy a wealth of pastimes. Marx, earlier, had a similar dream: “In communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic.” Sign me up!

Ian Bogost suggests that what modern technology might be creating is a kind of parody of that utopia — a Rube Goldbergian treadmill of small, never-ending tasks. The regulation of online production is turning us into jittery information-processing generalists, jacks of all media trades. We’re all “hyperemployed,” whether we’re earning a decent wage or not:

Increasingly, online life in general feels like this. The endless, constant flow of email, notifications, direct messages, favorites, invitations. After that daybreak email triage, so many other icons on your phone boast badges silently enumerating their demands. Facebook notifications. Twitter @-messages, direct messages. Tumblr followers, Instagram favorites, Vine comments. Elsewhere too: comments on your blog, on your YouTube channel … New messages in the forums you frequent. Your Kickstarter campaign updates. Your Etsy shop. Your Ebay watch list. And then, of course, more email. Always more email. …

Even if there is more than a modicum of exploitation at work in the hyperemployment economy, the despair and overwhelm of online life doesn’t derive from that exploitation—not directly anyway. Rather, it’s a type of exhaustion of the same sort that afflicts the underemployed as well … The economic impact of hyperemployment is obviously different from that of underemployment, but some of the same emotional toll imbues both: a sense of inundation, of being trounced by demands whose completion yields only their continuance, and a feeling of resignation that no other scenario is likely or even possible.

They gave us utopia, but they forgot the fishing rods.


Peak ebook?

I’ve been documenting the recent, surprisingly sharp decline in ebook sales growth. The falloff has continued through the first half of this year, with ebooks now showing clear signs of “stagnating” at about 25 percent of the overall U.S. book market, according to Digital Book World: “Once thought destined to reach 50% or 80% of all book buying and reading in the U.S., ebooks have stalled out on their way up to higher altitude.”

DBW bases that conclusion on a new study by the Book Industry Study Group, a publishing trade association, which uses data from Nielsen Book Research. The study shows that “for the past year or so, the share of ebooks among all new books sold — both in units and dollars — has been flat at about 30% and just under 15%, respectively.” A DBW chart drawn from the Nielsen numbers indicates that ebooks actually lost some market share during the second quarter of this year (a trend also seen in recent sales reports from the Association of American Publishers):

[Chart: ebook share of new books sold]

The Nielsen data also reveals “a slow decline in the number of people who exclusively buy e-books.” Comments Nielsen’s Jo Henry: “It is clear from four annual research surveys that e-books are in the later stages of the innovation curve and have settled into reasonably predictable consumption patterns.”

Maybe this is just an anomaly and ebooks will eventually gain a second wind and start taking more share from printed books. Right now, though, it’s looking as though there’s a Gutenberg Firewall — and that ebooks have hit it.


All the world’s a screen

[Image: still from Contempt]

Do you wear Google Glass, or does Google Glass wear you?

That question came to the fore on October 15, when the U.S. government granted Google a broad patent for “hand gestures to signify what is important.” Now, don’t panic. You’re not going to be required to ask Google’s permission before pointing or clapping or high-fiving or fist-pumping. The gestures governed by the patent are limited to those “used to provide user input to a wearable computing device.” They are, in particular, hand motions that the company envisions will help people use Glass and other head-mounted computers.

One of the challenges presented by devices like Glass is the lack of flexible input devices. Desktops have keyboards and mice. Laptops have touchpads. Smartphones, tablets, and other touchscreen devices have the user’s fingers. How do you send instructions to a computer that takes the form of a pair of glasses? How do you operate its apps? You can move your head around — Glass has a motion sensor — but that accomplishes only so much. There aren’t all that many ways you can waggle your noggin, and none of them are particularly precise. But Glass does have a camera, and the camera can be programmed to recognize particular hand gestures and translate them into instructions for software applications.

To take a particularly literal-minded example, you can frame an object inside a heart formed by your thumbs and fingers in order to register your approval of or fondness for the object. The effect would be the same as clicking a Like button. Google, in its patent filing, provides a couple of illustrations:

[Patent illustration: heart gesture]

You can also “select” some part of the landscape by making a “lasso” gesture with your finger:

[Patent illustration: lasso gesture]

You can also do the machine-gun thing with your thumb and index finger, though Google is coy about exactly what that might mean:

[Patent illustration: shooting gesture]

The above illustration is described only as “an example user who is depicted as making an example hand gesture, according to an example embodiment.”
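
To make the input pipeline concrete: here is a hypothetical sketch, in Java, of the dispatch step the patent contemplates, in which a gesture recognized by the camera is translated into an instruction for an application. Every name below is invented for illustration and is not part of any actual Glass API.

```java
import java.util.Map;

// Hypothetical sketch only: these names are invented for illustration
// and do not come from Google's actual Glass software.
public class GestureDispatch {

    enum Gesture { HEART, LASSO, FINGER_GUN }

    // Each gesture the camera recognizes maps to an app-level instruction.
    static final Map<Gesture, String> INSTRUCTIONS = Map.of(
            Gesture.HEART, "register approval of the framed object",
            Gesture.LASSO, "select the encircled part of the scene",
            Gesture.FINGER_GUN, "unspecified: the patent stays coy");

    static void dispatch(Gesture recognized) {
        System.out.println(INSTRUCTIONS.get(recognized));
    }

    public static void main(String[] args) {
        dispatch(Gesture.HEART); // the camera reports a heart-framing gesture
    }
}
```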

Along with excitement and curiosity, the prospect of Glass’s arrival in the mass market has provoked trepidation, stemming mainly from the documentary possibilities of the device’s tiny camera. Some people are nervous about a further loss of privacy and a further expansion of surveillance should the multitudes begin having network-connected cameras strapped to their foreheads. The camera, the patent makes clear, plays a data-input role as well as a documentary one, and the use of jerky hand motions to control Glass and its apps should be cause for a little added anxiety, if only for the dubious aesthetic merits of having people walking around making weird gestures all the time.

But there’s something deeper going on here. Glass turns the human body into a computer input device more fully and more publicly than anything we’ve seen before — more so than even Microsoft’s Kinect. Kinect focuses a fixed camera on the user, while Glass focuses a mobile camera outward from the user. It’s an important distinction. With Glass, a person’s gaze becomes a computer cursor, with the focus of the gaze also directing the focus of the computer. What that also means is that the person’s surroundings effectively become a computer display. The world becomes an array of data that can be manipulated by both the cursory gaze of its wearer and the input signals sent by the wearer’s hand gestures. This fulfills the ultimate dream of those who desire, for reasons of ideology or financial gain or both, to make all of existence “machine-readable,” to turn all of reality into a store of digital data. The role of the computer as mediator of experience becomes universal and seamless. Whereas “virtual reality” provided us with a simulation of the real that remained separate from the real, Glass turns the real into a simulation of itself.

“Gradually,” the CUNY media scholar Lev Manovich wrote nearly twenty years ago, “cinema taught us to accept the manipulation of time and space, the arbitrary coding of the visible, the mechanization of vision, and the reduction of reality to a moving image as a given. As a result, today the conceptual shock of the digital revolution is not experienced as a real shock — because we were ready for it for a long time.” We are now, in a similar way, prepared for the further revolution of Glass and its radical transformation of the real. Under the misleading slogan of “reality augmentation,” we are set — should this be the course we choose — to undergo “reality reduction,” as the world becomes a computer display and its sensory richness fades even further away.

Image: still from Contempt by Jean-Luc Godard.


Clifford Nass, RIP

I just heard, through Matt Richtel’s appreciation, the very sad news that Clifford Nass died of a heart attack over the weekend. He was 55 and, as a colleague of his at Stanford said, “at the peak of his career.” I found Cliff’s research on human-computer interaction to be invaluable, and I had the great pleasure of meeting him once and talking with him several times. He and his work will be missed.


A lesson in strategic patience


Not being a teacher myself, I’m wary of handing out pedagogical advice. (I suspect that classrooms are more complicated places than I can imagine.) But this article by Harvard art history professor Jennifer L. Roberts, adapted from a talk she gave at an educational conference earlier this year, is something that I hope a lot of teachers will read. It begins with Roberts noting a recent shift in the way she plans her lessons:

During the past few years, I have begun to feel that I need to take a more active role in shaping the temporal experiences of the students in my courses; that in the process of designing a syllabus I need not only to select readings, choose topics, and organize the sequence of material, but also to engineer, in a conscientious and explicit way, the pace and tempo of the learning experiences. When will students work quickly? When slowly? When will they be expected to offer spontaneous responses, and when will they be expected to spend time in deeper contemplation?

She offers a remarkable example of how she goes about engineering experiences “on the slow end of this tempo spectrum”:

In all of my art history courses, graduate and undergraduate, every student is expected to write an intensive research paper based on a single work of art of their own choosing. And the first thing I ask them to do in the research process is to spend a painfully long time looking at that object. Say a student wanted to explore the work popularly known as Boy with a Squirrel, painted in Boston in 1765 by the young artist John Singleton Copley. Before doing any research in books or online, the student would first be expected to go to the Museum of Fine Arts, where it hangs, and spend three full hours looking at the painting, noting down his or her evolving observations as well as the questions and speculations that arise from those observations. The time span is explicitly designed to seem excessive. Also crucial to the exercise is the museum or archive setting, which removes the student from his or her everyday surroundings and distractions.

At first many of the students resist being subjected to such a remedial exercise. How can there possibly be three hours’ worth of incident and information on this small surface? How can there possibly be three hours’ worth of things to see and think about in a single work of art? But after doing the assignment, students repeatedly tell me that they have been astonished by the potentials this process unlocked. … What this exercise shows students is that just because you have looked at something doesn’t mean that you have seen it. Just because something is available instantly to vision does not mean that it is available instantly to consciousness. Or, in slightly more general terms: access is not synonymous with learning. What turns access into learning is time and strategic patience.

Seriously, you should read the whole thing, even if you’re not a teacher. You’ll learn the backstory of Boy with a Squirrel, which, Roberts argues, “is an embodiment of the delays that it was created to endure” and hints at the way “the very fabric of human understanding was woven to some extent out of delay, belatedness, waiting.”


The private and the public


Different people will set the line between the private and the public in different places. Different societies will as well. As Evgeny Morozov writes in an excellent essay in the new issue of Technology Review, the growing ability of corporations, governments, and individuals to use computers to collect and analyze data on personal behavior has for many years now created social pressure to move the line ever more toward the public, squeezing the realm of the private. If some public good can be attributed to, or even anticipated from, an expansion in the collection of personal data — an increase in efficiency or safety, say — it becomes difficult to argue against that expansion. Privacy advocates become marginalized, as they attempt to defend an abstract good against a practical and measurable one.

As the trend continues, the outputs of data-analysis programs begin to shape public policy. What’s been termed “algorithmic regulation” takes the place of public debate. Policy decisions, and even personal ones, start to be automated, and the individual begins to be disenfranchised. Morozov quotes from a perceptive 1985 lecture by Spiros Simitis: “Where privacy is dismantled, both the chance for personal assessment of the political … process and the opportunity to develop and maintain a particular style of life fade.” The pursuit of transparency, paradoxically, ends up making society’s workings more opaque to its citizens. Comments Morozov:

In case after case, Simitis argued, we stood to lose. Instead of getting more context for decisions, we would get less; instead of seeing the logic driving our bureaucratic systems and making that logic more accurate and less Kafkaesque, we would get more confusion because decision making was becoming automated and no one knew how exactly the algorithms worked. We would perceive a murkier picture of what makes our social institutions work; despite the promise of greater personalization and empowerment, the interactive systems would provide only an illusion of more participation. As a result, “interactive systems … suggest individual activity where in fact no more than stereotyped reactions occur.”

Simitis offered a particularly prescient assessment of the kind of polity that would ultimately emerge from this trend:

Habits, activities, and preferences are compiled, registered, and retrieved to facilitate better adjustment, not to improve the individual’s capacity to act and to decide. Whatever the original incentive for computerization may have been, processing increasingly appears as the ideal means to adapt an individual to a predetermined, standardized behavior that aims at the highest possible degree of compliance with the model patient, consumer, taxpayer, employee, or citizen.

Morozov goes on to explore the insidious effects of what he terms “the invisible barbed wire of big data,” and he argues, compellingly, that those effects can be tempered only through informed political debate, not through technological fixes.

I have only one quibble with Morozov’s argument. He declares that “privacy is not an end in itself” but rather “a means of achieving a certain ideal of democratic politics.” That strikes me as an overstatement. In claiming that the private can only be justified by its public benefits, Morozov displays the sensibility that he criticizes. I agree wholeheartedly that privacy is a means to a social end — to an ideal of democratic politics — but I think it is also an end in itself, or, to be more precise, it is a means to important personal as well as public ends. A sense of privacy is essential to the exploration and formation of the self, just as it’s essential to civic participation and political debate.

Photo by Alexandre Dulaunoy.
