All the world’s a screen

contempt

Do you wear Google Glass, or does Google Glass wear you?

That question came to the fore on October 15, when the U.S. government granted Google a broad patent for “hand gestures to signify what is important.” Now, don’t panic. You’re not going to be required to ask Google’s permission before pointing or clapping or high-fiving or fist-pumping. The gestures governed by the patent are limited to those “used to provide user input to a wearable computing device.” They are, in particular, hand motions that the company envisions will help people use Glass and other head-mounted computers.

One of the challenges presented by devices like Glass is the lack of flexible input devices. Desktops have keyboards and mice. Laptops have touchpads. Smartphones, tablets, and other touchscreen devices have the user’s fingers. How do you send instructions to a computer that takes the form of a pair of glasses? How do you operate its apps? You can move your head around — Glass has a motion sensor — but that accomplishes only so much. There aren’t all that many ways you can waggle your noggin, and none of them are particularly precise. But Glass does have a camera, and the camera can be programmed to recognize particular hand gestures and translate them into instructions for software applications.
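At bottom, a gesture-driven interface of this kind reduces to a dispatch table: the camera's recognition layer emits a gesture label, and software maps that label to a command. Here is a minimal sketch in Python; the gesture names and actions are hypothetical illustrations, not anything specified in Google's patent:

```python
# Hypothetical sketch: routing camera-recognized gestures to app commands.
# Gesture names and actions are illustrative, not Google's actual design.

def like(target):
    """Register approval of the framed object (a stand-in for a Like click)."""
    return f"liked {target}"

def select(target):
    """Mark part of the visual field for further action."""
    return f"selected {target}"

# The recognition layer would emit one of these labels per detected gesture.
GESTURE_COMMANDS = {
    "heart": like,    # frame an object in a thumb-and-finger heart
    "lasso": select,  # circle part of the scene with a finger
}

def handle_gesture(gesture, target):
    """Translate a recognized gesture into an application action."""
    command = GESTURE_COMMANDS.get(gesture)
    if command is None:
        return None  # unrecognized gesture: ignore it
    return command(target)
```

The hard part, of course, is the recognition layer itself; once a gesture has been classified, the routing is trivial.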

To take a particularly literal-minded example, you can frame an object inside a heart formed by your thumbs and fingers in order to register your approval of or fondness for the object. The effect would be the same as clicking a Like button. Google, in its patent filing, provides a couple of illustrations:

heart

You can also “select” some part of the landscape by making a “lasso” gesture with your finger:

lasso

You can also do the machine-gun thing with your thumb and index finger, though Google is coy about exactly what that might mean:

shooting

The above illustration is described only as “an example user who is depicted as making an example hand gesture, according to an example embodiment.”

Along with excitement and curiosity, the prospect of Glass’s arrival in the mass market has provoked trepidation, stemming mainly from the documentary possibilities of the device’s tiny camera. Some people are nervous about a further loss of privacy and a further expansion of surveillance should the multitudes begin having network-connected cameras strapped to their foreheads. The camera, the patent makes clear, plays a data-input role as well as a documentary one, and the use of jerky hand motions to control Glass and its apps should be cause for a little added anxiety, if only for the dubious aesthetic merits of having people walking around making weird gestures all the time.

But there’s something deeper going on here. Glass turns the human body into a computer input device more fully and more publicly than anything we’ve seen before — more so than even Microsoft’s Kinect. Kinect focuses a fixed camera on the user, while Glass focuses a mobile camera outward from the user. It’s an important distinction. With Glass, a person’s gaze becomes a computer cursor, with the focus of the gaze also directing the focus of the computer. What that also means is that the person’s surroundings effectively become a computer display. The world becomes an array of data that can be manipulated by both the cursory gaze of its wearer and the input signals sent by the wearer’s hand gestures. This fulfills the ultimate dream of those who desire, for reasons of ideology or financial gain or both, to make all of existence “machine-readable,” to turn all of reality into a store of digital data. The role of the computer as mediator of experience becomes universal and seamless. Whereas “virtual reality” provided us with a simulation of the real that remained separate from the real, Glass turns the real into a simulation of itself.

“Gradually,” the CUNY media scholar Lev Manovich wrote nearly twenty years ago, “cinema taught us to accept the manipulation of time and space, the arbitrary coding of the visible, the mechanization of vision, and the reduction of reality to a moving image as a given. As a result, today the conceptual shock of the digital revolution is not experienced as a real shock — because we were ready for it for a long time.” We are now, in a similar way, prepared for the further revolution of Glass and its radical transformation of the real. Under the misleading slogan of “reality augmentation,” we are set — should this be the course we choose — to undergo “reality reduction,” as the world becomes a computer display and its sensory richness fades even further away.

Image: still from Contempt by Jean-Luc Godard.

Clifford Nass, RIP

I just heard, through Matt Richtel’s appreciation, the very sad news that Clifford Nass died over the weekend of a heart attack. He was 55 and, as a colleague of his at Stanford said, “at the peak of his career.” I found Cliff’s research on human-computer interaction to be invaluable, and I had the great pleasure of meeting him once and talking with him several times. He and his work will be missed.

A lesson in strategic patience

Not being a teacher myself, I’m wary of handing out pedagogical advice. (I suspect that classrooms are more complicated places than I can imagine.) But this article by Harvard art history professor Jennifer L. Roberts, adapted from a talk she gave at an educational conference earlier this year, is something that I hope a lot of teachers will read. It begins with Roberts noting a recent shift in the way she plans her lessons:

During the past few years, I have begun to feel that I need to take a more active role in shaping the temporal experiences of the students in my courses; that in the process of designing a syllabus I need not only to select readings, choose topics, and organize the sequence of material, but also to engineer, in a conscientious and explicit way, the pace and tempo of the learning experiences. When will students work quickly? When slowly? When will they be expected to offer spontaneous responses, and when will they be expected to spend time in deeper contemplation?

She offers a remarkable example of how she goes about engineering experiences “on the slow end of this tempo spectrum”:

In all of my art history courses, graduate and undergraduate, every student is expected to write an intensive research paper based on a single work of art of their own choosing. And the first thing I ask them to do in the research process is to spend a painfully long time looking at that object. Say a student wanted to explore the work popularly known as Boy with a Squirrel, painted in Boston in 1765 by the young artist John Singleton Copley. Before doing any research in books or online, the student would first be expected to go to the Museum of Fine Arts, where it hangs, and spend three full hours looking at the painting, noting down his or her evolving observations as well as the questions and speculations that arise from those observations. The time span is explicitly designed to seem excessive. Also crucial to the exercise is the museum or archive setting, which removes the student from his or her everyday surroundings and distractions.

At first many of the students resist being subjected to such a remedial exercise. How can there possibly be three hours’ worth of incident and information on this small surface? How can there possibly be three hours’ worth of things to see and think about in a single work of art? But after doing the assignment, students repeatedly tell me that they have been astonished by the potentials this process unlocked. … What this exercise shows students is that just because you have looked at something doesn’t mean that you have seen it. Just because something is available instantly to vision does not mean that it is available instantly to consciousness. Or, in slightly more general terms: access is not synonymous with learning. What turns access into learning is time and strategic patience.

Seriously, you should read the whole thing, even if you’re not a teacher. You’ll learn the backstory of Boy with a Squirrel, which, Roberts argues, “is an embodiment of the delays that it was created to endure” and hints at the way “the very fabric of human understanding was woven to some extent out of delay, belatedness, waiting.”

The private and the public

Different people will set the line between the private and the public in different places. Different societies will as well. As Evgeny Morozov writes in an excellent essay in the new issue of Technology Review, the growing ability of corporations, governments, and individuals to use computers to collect and analyze data on personal behavior has for many years now created social pressure to move the line ever more toward the public, squeezing the realm of the private. If some public good can be attributed to, or even anticipated from, an expansion in the collection of personal data — an increase in efficiency or safety, say — it becomes difficult to argue against that expansion. Privacy advocates become marginalized, as they attempt to defend an abstract good against a practical and measurable one.

As the trend continues, the outputs of data-analysis programs begin to shape public policy. What’s been termed “algorithmic regulation” takes the place of public debate. Policy decisions, and even personal ones, start to be automated, and the individual begins to be disenfranchised. Morozov quotes from a perceptive 1985 lecture by Spiros Simitis: “Where privacy is dismantled, both the chance for personal assessment of the political … process and the opportunity to develop and maintain a particular style of life fade.” The pursuit of transparency, paradoxically, ends up making society’s workings more opaque to its citizens. Comments Morozov:

In case after case, Simitis argued, we stood to lose. Instead of getting more context for decisions, we would get less; instead of seeing the logic driving our bureaucratic systems and making that logic more accurate and less Kafkaesque, we would get more confusion because decision making was becoming automated and no one knew how exactly the algorithms worked. We would perceive a murkier picture of what makes our social institutions work; despite the promise of greater personalization and empowerment, the interactive systems would provide only an illusion of more participation. As a result, “interactive systems … suggest individual activity where in fact no more than stereotyped reactions occur.”

Simitis offered a particularly prescient assessment of the kind of polity that would ultimately emerge from this trend:

Habits, activities, and preferences are compiled, registered, and retrieved to facilitate better adjustment, not to improve the individual’s capacity to act and to decide. Whatever the original incentive for computerization may have been, processing increasingly appears as the ideal means to adapt an individual to a predetermined, standardized behavior that aims at the highest possible degree of compliance with the model patient, consumer, taxpayer, employee, or citizen.

Morozov goes on to explore the insidious effects of what he terms “the invisible barbed wire of big data,” and he argues, compellingly, that those effects can be tempered only through informed political debate, not through technological fixes.

I have only one quibble with Morozov’s argument. He declares that “privacy is not an end in itself” but rather “a means of achieving a certain ideal of democratic politics.” That strikes me as an overstatement. In claiming that the private can only be justified by its public benefits, Morozov displays the sensibility that he criticizes. I agree wholeheartedly that privacy is a means to a social end — to an ideal of democratic politics — but I think it is also an end in itself, or, to be more precise, it is a means to important personal as well as public ends. A sense of privacy is essential to the exploration and formation of the self, just as it’s essential to civic participation and political debate.

Photo by Alexandre Dulaunoy.

The mind at play

open awareness

I have a review of Daniel Goleman’s new book, Focus, in the New York Times Sunday Book Review this week. It (the review) starts:

“Ineluctable modality of the visible.” So begin the musings of Stephen Dedalus as he walks along Sandymount Strand in the third chapter of James Joyce’s Ulys­ses.  “Signatures of all things I am here to read.” The chapter isn’t just a tour de force of prose writing. It’s an exquisitely sensitive depiction of a mind at play. Conscious of his own consciousness, Dedalus monitors his thoughts without reining them in. He’s at once focused and un­focused. Seemingly scattered ideas, sensations and memories coalesce into patterns, into art.

Brain researchers and Zen masters call this state of mind “open awareness” …

Read on.

Listen to me

Kiewit Computation Center 1966

I’ll be giving a couple of talks in the Northeast next week. Both are free and open to the public. On Monday, I’ll be in Hanover, New Hampshire, to give a lecture at my alma mater, Dartmouth College. (Details here.) And on Wednesday I’ll be in Buffalo to speak at Medaille College. (Details here.) If you’re in either area, please come by.

That fuzzy picture up there, incidentally, is of Dartmouth’s Kiewit Computation Center, where, in the late 1970s, I first touched a digital computer — more precisely, a terminal connected to the school’s mainframe time-sharing system. Kiewit was torn down in 2000.

Photo: Dartmouth College.

Frederick Taylor and the quantified self

stopwatch

The faithful gathered in San Francisco earlier this month for the Quantified Self 2013 Global Conference, an annual conclave of “self-trackers and tool-makers.” Founded by long-time technology writers Gary Wolf and Kevin Kelly, the Quantified Self, or QS, movement aims to bring the new apparatus of big data to the old pursuit of self-actualization, using sensors, wearables, apps, and the cloud to monitor and optimize bodily functions and design a more perfect self. “Instead of interrogating their inner worlds through talking and writing,” Wolf explains, trackers are seeking “self-knowledge through numbers.” He continues: “Behind the allure of the quantified self is a guess that many of our problems come from simply lacking the instruments to understand who we are.”

“Allure” may be an overstatement. A small band of enthusiasts is gung-ho for QS. But the masses, so far, have shown little interest in self-tracking, rarely going beyond the basic pedometer level of monitoring fitness regimes. Like meticulous calorie counting, self-tracking is hard to sustain. It gets boring quickly, and the numbers are more likely to breed anxiety than contentment. There’s a reason the body keeps its vagaries out of the conscious mind.

But, as management researcher H. James Wilson reports in the Wall Street Journal, there is one area where self-tracking is beginning to be pursued with vigor: business operations. Some companies are outfitting employees with wearable computers and other self-tracking gadgets in order to “gather subtle data about how they move and act — and then use that information to help them do their jobs better.” There is, for example, the Hitachi Business Microscope, which office workers wear on a lanyard around their neck. “The device is packed with sensors that monitor things like how workers move and speak, as well as environmental factors like light and temperature. So, it can track where workers travel in an office, and recognize whom they’re talking to by communicating with other people’s badges. It can also measure how well they’re talking to them — by recording things like how often they make hand gestures and nod, and the energy level in their voice.” Other companies are developing Google Glass-style “smart glasses” to accomplish similar things.

A little more than a century ago, Frederick Winslow Taylor introduced “scientific management” to American factories. By meticulously tracking and measuring the physical movements of manufacturing workers as they went through their tasks, Taylor counseled, companies could determine the “one best way” to do any job and then enforce that protocol on all other workers. Through the systematic collection of data, industry could be optimized, operated as a perfectly calibrated machine. “In the past the man has been first,” declared Taylor; “in the future the system must be first.”

The goals and mechanics of the Quantified Self movement, when applied in business settings, not only bring back the ethic of Taylorism, but extend Taylorism’s reach into the white-collar workforce. The dream of perfect optimization reaches into the intimate realm of personal affiliation and conversation among colleagues. One thing that Taylor’s system aided was the mechanization of factory work. Once you had turned the jobs of human workers into numbers, it turned out, you also had a good template for replacing those workers with machines. It seems that the new Taylorism might accomplish something similar for knowledge work. It provides the specs for software applications that can take over the jobs of even highly educated professionals.

One can imagine other ways QS might be productively applied in the commercial realm. Automobile insurers already give policy holders an incentive for installing tracking sensors in their cars to monitor their driving habits. It seems only logical for health and life insurers to provide similar incentives for policy holders who wear body sensors. Premiums could then be adjusted based on, say, a person’s cholesterol or blood sugar levels, or food intake, or even the areas they travel in or the people they associate with — anything that correlates with risk of illness or death. (Rough Type readers will remember that this is a goal that Yahoo director Max Levchin is actively pursuing.)
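The actuarial logic here is simple to sketch: each tracked metric becomes a loading factor applied to a base premium. A hypothetical illustration in Python; the thresholds and multipliers are invented for the example and bear no relation to any real insurer's pricing:

```python
# Hypothetical sketch of sensor-adjusted insurance pricing.
# All thresholds and loading factors below are invented for illustration.

def adjusted_premium(base, cholesterol_mg_dl, glucose_mg_dl):
    """Scale a base premium by crude risk loadings from body-sensor data."""
    factor = 1.0
    if cholesterol_mg_dl > 200:  # above a (hypothetical) healthy ceiling
        factor *= 1.15
    if glucose_mg_dl > 125:      # above a (hypothetical) fasting threshold
        factor *= 1.20
    return round(base * factor, 2)
```

The point of the sketch is how little machinery is required: once the data stream exists, turning bodily vagaries into price signals is a few lines of arithmetic.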

The transformation of QS from tool of liberation to tool of control follows a well-established pattern in the recent history of networked computers. Back in the mainframe age, computers were essentially control mechanisms, aimed at monitoring and enforcing rules on people and processes. In the PC era, computers also came to be used to liberate people, freeing them from corporate oversight and control. The tension between central control and personal liberation continues to define the application of computer power. We originally thought that the internet would tilt the balance further away from control and toward liberation. That now seems to be a misjudgment. By extending the collection of data to intimate spheres of personal activity and then centralizing the storage and processing of that data, the net actually seems to be shifting the balance back toward the control function. The system takes precedence.