What do robots do?


Yesterday I posted an excerpt from the start of Paul Goodman’s 1969 NYRB essay “Can Technology Be Humane?” Here’s another bit, equally relevant to our current situation, from later in the piece, when Goodman turns his attention to automation, robots, and what we today call “big data”:

In automating there is an analogous dilemma of how to cope with masses of people and get economies of scale, without losing the individual at great consequent human and economic cost. A question of immense importance for the immediate future is, Which functions should be automated or organized to use business machines, and which should not? This question also is not getting asked, and the present disposition is that the sky is the limit for extraction, refining, manufacturing, processing, packaging, transportation, clerical work, ticketing, transactions, information retrieval, recruitment, middle management, evaluation, diagnosis, instruction, and even research and invention. Whether the machines can do all these kinds of jobs and more is partly an empirical question, but it also partly depends on what is meant by doing a job. Very often, e.g., in college admissions, machines are acquired for putative economies (which do not eventuate); but the true reason is that an overgrown and overcentralized organization cannot be administered without them. The technology conceals the essential trouble, e.g., that there is no community of scholars and students are treated like things. The function is badly performed, and finally the system breaks down anyway. I doubt that enterprises in which interpersonal relations are important are suited to much programming.

But worse, what can happen is that the real function of the enterprise is subtly altered so that it is suitable for the mechanical system. (E.g., “information retrieval” is taken as an adequate replacement for critical scholarship.) Incommensurable factors, individual differences, the local context, the weighting of evidence are quietly overlooked though they may be of the essence. The system, with its subtly transformed purposes, seems to run very smoothly; it is productive, and it is more and more out of line with the nature of things and the real problems. Meantime it is geared in with other enterprises of society, e.g., major public policy may depend on welfare or unemployment statistics which, as they are tabulated, are blind to the actual lives of poor families. In such a case, the particular system may not break down, the whole society may explode.

I need hardly point out that American society is peculiarly liable to the corruption of inauthenticity, busily producing phony products. It lives by public relations, abstract ideals, front politics, show-business communications, mandarin credentials. It is preeminently overtechnologized. And computer technologists especially suffer the euphoria of being in a new and rapidly expanding field. It is so astonishing that the robot can do the job at all or seem to do it, that it is easy to blink at the fact that he is doing it badly or isn’t really doing quite that job.

Goodman here makes a crucial point that still gets overlooked in discussions of automation. Computers and people work in different ways. When any task is shifted from a person to a computer, therefore, the task changes in order to be made suitable for the computer. As the process of automation continues, the context in which the task is performed also changes, in order to be made amenable to automation. The enterprise changes, the school changes, the hospital changes, the household changes, the economy changes, the society changes. The temptation, all along the way, is to look to the computer to provide the measures by which we evaluate those changes, which ends up concealing rather than revealing the true and full nature of the changes. Goodman expresses the danger succinctly: “The system, with its subtly transformed purposes, seems to run very smoothly; it is productive, and it is more and more out of line with the nature of things and the real problems.”

Image: cutetape.

The prudent technologist


Paul Goodman, 1969:

Whether or not it draws on new scientific research, technology is a branch of moral philosophy, not of science. It aims at prudent goods for the commonweal and to provide efficient means for these goods. At present, however, “scientific technology” occupies a bastard position in the universities, in funding, and in the public mind. It is half tied to the theoretical sciences and half treated as mere know-how for political and commercial purposes. It has no principles of its own. To remedy this—so Karl Jaspers in Europe and Robert Hutchins in America have urged—technology must have its proper place on the faculty as a learned profession important in modern society, along with medicine, law, the humanities, and natural philosophy, learning from them and having something to teach them. As a moral philosopher, a technician should be able to criticize the programs given him to implement. As a professional in a community of learned professionals, a technologist must have a different kind of training and develop a different character than we see at present among technicians and engineers. He should know something of the social sciences, law, the fine arts, and medicine, as well as relevant natural sciences.

Prudence is foresight, caution, utility. Thus it is up to the technologists, not to regulatory agencies of the government, to provide for safety and to think about remote effects. This is what Ralph Nader is saying and Rachel Carson used to ask. An important aspect of caution is flexibility, to avoid the pyramiding catastrophe that occurs when something goes wrong in interlocking technologies, as in urban power failures. Naturally, to take responsibility for such things often requires standing up to the front office and urban politicians, and technologists must organize themselves in order to have power to do it.



Alan Jacobs:

Digital textuality offers us the chance to restore commentary to its pre-modern place as the central scholarly genre.

Recent technologies enable a renewal of commentary, but struggle to overcome a post-Romantic belief that commentary is belated, derivative. …

If our textual technologies promote commentary but we resist it, we will achieve a Pyrrhic victory over our technologies.

Andrew Piper:

The main difference between our moment and the lost world of pre-modern commentary that Jacobs invokes is of course a material one. In a context of hand-written documents, transcription was the primary activity that consumed most individuals’ time. Transcription preceded, but also informed commentary (as practiced by the medieval Arab translator Joannitius). Who would be flippant when it had just taken weeks to copy something out? The submission that Jacobs highlights as a prerequisite of good commentary — a privileging of someone else’s point of view over our own — was a product of corporeal labor. Our bodies shaped our minds’ eye.

It’s interesting that Jacobs and Piper offer different explanations for the diminished role of textual commentary in intellectual life. Jacobs traces it to a shift in cultural attitudes, particularly our recent, post-Romantic embrace of self-expression and originality at the expense of humility and receptiveness. Tacitly, he also implicates the even more recent, post-modern belief that the written word is something to be approached with suspicion rather than respect. For Piper, the reason lies in an earlier shift in media technology: when the printing press and other tools for the mechanical reproduction of text removed the need for manual transcription, they also reduced the depth of response, and the humbleness, that transcription promoted. “Who would be flippant when it had just taken weeks to copy something out?” These explanations are not mutually exclusive, of course, and the tension between them seems apt, as both Jacobs and Piper seek to explore the intersection of, on the one hand, reading and writing technologies and, on the other, cultural attitudes toward reading and writing.

While the presentation of text on shared computer networks does open up a vast territory for comment, what Jacobs terms “digital textuality” is hardly promoting the kind of self-effacing commentary he yearns for. The two essential innovations of computerized writing and reading — the word processor’s cut-and-paste function and the hypertext of the web — make text malleable and provisional. Presented on a computer, the written work is no longer an artifact to be contemplated and pondered but rather raw material to be worked over by the creative I — not a sculpture but a gob of clay. Reading becomes a means of re-writing. Textual technologies make text submissive and subservient to the reader, not the other way around. They encourage, toward the text, not the posture of the monk but the posture of the graffiti artist. Is it any wonder that most online comments feel as though they were written in spray paint?

I’m exaggerating, a bit. It’s possible to sketch out an alternative history of the net in which thoughtful reading and commentary play a bigger role. In its original form, the blog, or web log, was more a reader’s medium than a writer’s medium. And one can, without too much work, find deeply considered comment threads spinning out from online writings. But the blog turned into a writer’s medium, and readerly comments remain the exception, as both Jacobs and Piper agree. One of the dreams for the web, expressed through a computer metaphor, was that it would be a “read-write” medium rather than a “read-only” medium. In reality, the web is more of a write-only medium, with the desire for self-expression largely subsuming the act of reading. So I’m doubtful about Jacobs’s suggestion that the potential of our new textual technologies is being frustrated by our cultural tendencies. The technologies and the culture seem of a piece. We’re not resisting the tools; we’re using them as they were designed to be used.

Could this change? Maybe. “Not all is lost today,” writes Piper. “While comment threads seethe, there is also a vibrant movement afoot to remake the web as a massive space of commentary. The annotated web, as it’s called, has the aim of transforming our writing spaces from linked planes to layered marginalia.” But this, too, is an old dream. I remember a lot of excitement (and trepidation) about the “annotated web” at the end of the nineties. Browser plug-ins like Third Voice created an annotation layer on top of all web pages. If you had the plug-in installed, you could write your own comments on any page you visited, as well as read the comments written by others. But the attempt to create an annotated web failed. And it wasn’t just because the early adopters were spammers and trolls (though they were). Nor was it because corporate web publishers resisted the attempt to open their properties to outside commentary (though they did). What killed the annotated web was a lack of interest. Few could be bothered to download and install the plug-in. As Wired noted in a 2001 obituary for Third Voice, “with only a couple hundred thousand users at last count, Third Voice was never the killer app it promised to be. But its passage was a silent testament to the early idealism of the Web, and how the ubiquitous ad model killed it.”

It’s possible that new attempts to build an annotation layer will succeed where the earlier ones failed. Piper points in particular to Hypothes.is. And it’s also possible that a narrower application of an annotation layer, one designed specifically for scholarship, will arise. But I’m not holding my breath. I think Piper is correct in arguing that the real challenge is not creating a technology for annotation but re-creating a culture in which careful reading and commentary are as valued as self-expression: “It’s all well and good to say commentary is back. It’s another to truly re-imagine how a second grader or college student learns to write. What if we taught commentary instead of expression, not just for beginning writers, but right on through university and the PhD?” Piper may disagree, but that strikes me as a fundamentally anti-digital idea. If “a privileging of someone else’s point of view over our own” requires, as Piper writes, the submissiveness that comes from “corporeal labor,” then what is necessary above all is the re-embodiment of text.

Image of woodblock prepared for printing: Wikipedia.

The illusion of knowledge


The internet may be making us shallow, but it’s making us think we’re deep.

A newly published study by three Yale psychologists shows that searching the web gives people an “illusion of knowledge.” They start to confuse what’s online with what’s in their head, which gives them an exaggerated sense of their own intelligence. The effect isn’t limited to the particular subject areas that people explore on the web. It’s more general than that. Doing searches on one topic inflates people’s sense of how well they understand other, unrelated topics. As the researchers explain:

One’s self-assessed ability to answer questions increased after searching for explanations online in a previous, unrelated task, an effect that held even after controlling for time, content, and features of the search process. The effect derives from a true misattribution of the sources of knowledge, not a change in understanding of what counts as internal knowledge and is not driven by a “halo effect” or general overconfidence. We provide evidence that this effect occurs specifically because information online can so easily be accessed through search.

The researchers, Matthew Fisher, Mariel Goddu, and Frank Keil, documented the effect, and its cause, through nine experiments. They divided test subjects into two groups. One group spent time searching the web, the other group stayed offline, and then both groups estimated, in a variety of ways, their understanding of various topics. The experiments consistently showed that searching the web gives people an exaggerated sense of their own knowledge.

To make sure that searchers’ overconfidence in assessing their smarts stemmed from a misperception about the depth of knowledge in their own heads (rather than reflecting a confidence in their ability to Google the necessary information), the psychologists, in one of the experiments, had the test subjects make estimates of their brain activity:

Instead of asking participants to rate how well they could answer questions about topics using a Likert scale ranging from 1 (very poorly) to 7 (very well), participants were shown a scale consisting of seven functional MRI (fMRI) images of varying levels of activation, as illustrated by colored regions of increasing size. Participants were told, “Scientists have shown that increased activity in certain brain regions corresponds with higher quality explanations.” This dependent variable was designed to unambiguously emphasize one’s brain as the location of personally held knowledge. Participants were then asked to select the image that would correspond with their brain activity when they answered the self-assessed knowledge questions.

The subjects who had searched the net before the task rated their anticipated brain activity significantly higher than did the control group, who hadn’t been looking up information online.

Similar misperceptions may be produced by consulting other external, or “transactive,” sources of knowledge, the researchers note, but the illusion is probably much stronger with the web, given its unprecedented scope and accessibility:

This illusion of knowledge might well be found for sources other than the Internet: for example, an expert librarian may experience a similar illusion when accessing a reference Rolodex. … While such effects may be possible, the rise of the Internet has surely broadened the scope of this effect. Before the Internet, there was no similarly massive, external knowledge database. People relied on less immediate and accessible inanimate stores of external knowledge, such as books—or, they relied on other minds in transactive memory systems. In contrast with other sources and cognitive tools for informational access, the Internet is nearly always accessible, can be searched efficiently, and provides immediate feedback. For these reasons, the Internet might become even more easily integrated with the human mind than other external sources of knowledge and perhaps even more so than human transactive memory partners, promoting much stronger illusions of knowledge.

This is just one study, but it comes on the heels of a series of other studies on how access to the web and search engines is influencing the way our minds construct, or don’t construct, personal knowledge. A 2011 Columbia study found that the ready availability of online information reduces people’s retention of facts: “when people expect to have future access to [online] information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it,” a phenomenon which indicates “that processes of human memory are adapting to the advent of new computing and communication technology.” A 2014 Fairfield University study found that simply taking digital photographs of an experience will tend to reduce your memory of the experience. The University of Colorado’s Adrian Ward has found evidence that the shift from “biological information storage” toward “digital information storage” may “have large-scale and long-term effects on the way people remember and process information.” He says that the internet “may act as a ‘supernormal stimulus,’ hijacking preexisting cognitive tendencies and creating novel outcomes.”

In “How Google Is Changing Your Brain,” a 2013 Scientific American article written with the late Daniel Wegner, Ward reported on experiments revealing that

using Google gives people the sense that the Internet has become part of their own cognitive tool set. A search result was recalled not as a date or name lifted from a Web page but as a product of what resided inside the study participants’ own memories, allowing them to effectively take credit for knowing things that were a product of Google’s search algorithms. The psychological impact of splitting our memories equally between the Internet and the brain’s gray matter points to a lingering irony. The advent of the “information age” seems to have created a generation of people who feel they know more than ever before—when their reliance on the Internet means that they may know ever less about the world around them.

Ignorance is bliss, particularly when it’s mistaken for knowledge.

Image: detail of M. C. Escher’s “Man with Cuboid.”

The robot pharmacist


If you want to understand the complexities and pitfalls of automating medicine (and professional work in general), please read Bob Wachter’s story, adapted from his new book The Digital Doctor, of how Pablo Garcia, a 16-year-old patient at the University of California’s San Francisco Medical Center, came to be given a dose of 38 ½ antibiotic pills rather than the single pill he should have been given. (Part 1, part 2, part 3; part 4 will appear tomorrow.) Pretty much every problem with computer automation that I write about in The Glass Cage — automation complacency, automation bias, alert fatigue, overcomplexity, distraction, miscommunication, workload spikes, etc. — is on display in the chain of events that Wachter, himself a physician, describes.

It’s a complicated story, with many players and many moving parts, but I’ll just highlight one crucial episode. After the erroneous drug order enters the hospital’s computerized prescription system, the result of (among other things) a poorly designed software template, the order is transmitted to the hospital’s pill-packaging robot. Whereas a pharmacist or a pharmacy technician would almost certainly have noticed that something was amiss with the order, the robot dutifully packages up the 38 ½ pills as a single dose without a second’s hesitation:

The robot, installed in 2010 at a cost of $7 million, is programmed to pull medications off stocked shelves; to insert the pills into shrink-wrapped, bar-coded packages; to bind these packages together with little plastic rings; and then to send them by van to locked cabinets on the patient floors. “It gives us the first important step in eliminating the potential for human error,” said UCSF Medical Center CEO Mark Laret when the robot was introduced.

Like most robots, UCSF’s can work around the clock, never needing a break and never succumbing to a distraction.

In the blink of an eye, the order for Pablo Garcia’s Septra tablets zipped from the hospital’s computer to the robot, which dutifully collected the 38 ½ Septra tablets, placed them on a half-dozen rings, and sent them to Pablo’s floor, where they came to rest in a small bin waiting for the nurse to administer them at the appointed time. “If the order goes to the robot, the techs just sort it by location and put it in a bin, and that’s it,” [hospital pharmacist] Chan told me. “They eliminated the step of the pharmacist checking on the robot, because the idea is you’re paying so much money because it’s so accurate.”

Far from eliminating human error, the replacement of an experienced professional with a robot ensured that a major error went unnoticed. Indeed, by giving the mistaken dose the imprimatur of a computer, in the form of an official, sealed, bar-coded package, the robot pretty much guaranteed that the dispensing nurse, falling victim to automation bias, would reject her own doubts and give the child all the pills.

The problems with handwritten prescriptions — it’s all too easy to misinterpret doctors’ scribbles, sometimes to fatal effect — are legendary. But solving that very real problem with layers of computers, software templates, and robots introduces a whole new set of problems, most of which are never foreseen by the system’s designers. As is often the case in automating complex processes, the computers and their human partners end up working at cross-purposes, each operating under a different set of assumptions. Wachter explains:

As Pablo Garcia’s case illustrates, many of the new holes in the Swiss cheese weren’t caused by the computer doing something wrong, per se. They were caused by the complex, and under-appreciated, challenges that can arise when real humans — busy, stressed humans with all of our cognitive biases — come up against new technologies that alter the work in subtle ways that can create new hazards.

The lesson isn’t that computers and robots don’t have an important role to play in medicine. The lesson is that automated systems are also human systems. They work best when designed with a painstaking attentiveness to the skills and foibles of human beings. When people, particularly skilled, experienced professionals, are pushed to the sidelines, in the blind pursuit of efficiency, bad things happen.

Pablo Garcia survived the overdose, though not without a struggle.

Twilight of the idylls


The Silicon Valley guys have a new hobby: driving fast cars around private tracks. They love it. “When you’re really in the zone in a racecar, it’s almost meditative,” Google executive Jeff Huber tells the Times’s Farhad Manjoo. Adds Yahoo senior vice president Jeff Bonforte, “Your brain is so happy that it washes over you.” The Valley guys are a little nervous about the optics of their pastime — “Try to tone down the rich guy hobby thing,” angel investor and ex-Googler Joshua Schachter instructs Manjoo — but the “visceral thrill” of driving has nevertheless made it “the Valley’s ‘it’ hobby.”

The Valley guys are rushing to rent out racetracks and strap themselves into Ferraris at the very moment that they’re telling the rest of us how miserable driving is, and how liberated we’ll all feel when robots take the wheel. Jazzed by a Googler’s TED talk on driverless cars, MIT automation expert Andrew McAfee says that the Googlemobile will “free us from a largely tedious task.” Writes Wired transport reporter Alex Davies, “Liberated from the need to keep our hands on the wheel and eyes on the road, drivers will become riders with more time for working, leisure, and staying in touch with loved ones.” When Astro Teller, head of Google X, watches people drive by in their cars, all he hears is a giant sucking sound, as potentially productive minutes pour down the drain of a vast time sink. “There’s over a trillion dollars of wasted time per year we could collectively get back if we didn’t have to pay attention while the car took us from one place to another,” he said in a South by Southwest keynote this month.

Driving on a private track may be pleasantly meditative, even joy-inducing, but driving on public thoroughfares is just a drag.

What’s curious here is that the descriptions of everyday driving offered with such confidence by the avatars of driverlessness are at odds with what we know about people’s actual attitudes toward and experience of driving. People like to drive. Surveys and other research consistently show that most of us enjoy being behind the wheel. We find driving relaxing and fun and even, yes, liberating — a respite from the demands of our workaday lives. Seeing driving as a “problem” because it prevents us from being productive gets the story backwards. What’s freeing about driving is the very fact that it gives us a break from the pressure to be productive.

That doesn’t mean we’re blind to automotive miseries. When researchers talk to people about driving, they hear plenty of complaints about traffic jams and grinding commutes and bad roads and parking hassles and all the rest. Our attitudes toward driving are complex, always have been, but on balance we like to have our hands on the wheel and our eyes on the road, not to mention our foot on the gas. About 70 percent of Americans say they “like to drive,” while only about 30 percent consider it “a chore,” according to a 2006 Pew survey. A survey of millennials, released earlier this year by MTV, found that, contrary to common wisdom, most young people enjoy cars and driving, too. Seventy percent of Americans between the ages of 18 and 34 say they like to drive, and 72 percent of them say they’d rather give up texting for a week than give up their car for the same period. The percentage of people who like to drive has fallen a bit in recent years as traffic has worsened — 80 percent said they liked to drive in a 1991 Pew survey — but it’s still very high, and it belies the dreary picture of driving painted by Silicon Valley. You don’t have to be wealthy enough to buy a Porsche or to rent out a racetrack to enjoy the meditative and active pleasures of driving. They can be felt on the open road as well as the closed track.

In suggesting that driving is no more than a boring, productivity-sapping waste of time, the Valley guys are mistaking a personal bias for a universal truth. And they’re blinding themselves to the social and cultural challenges they’re going to face as they try to convince people to be passengers rather than drivers. Even if all the technical hurdles to achieving perfect vehicular automation are overcome — and despite rosy predictions, that remains a sizable if — the developers and promoters of autonomous cars are going to discover that the psychology of driving is far more complicated than they assume and far different from the psychology of being a passenger. Back in the 1970s, the public rebelled, en masse, when the federal government, for seemingly solid safety and fuel-economy reasons, imposed a national 55-mile-per-hour speed limit. The limit was repealed. If you think that everyone’s going to happily hand the steering wheel over to a robot, you’re probably delusional.

There’s something bigger going on here, and I confess that I’m still a little fuzzy about it. Silicon Valley seems to have a good deal of trouble appreciating, or even understanding, what I’ll term informal experience. It’s only when driving is formalized — removed from everyday life, transferred to a specialized facility, performed under a strict set of rules, and understood as a self-contained recreational event — that it can be conceived of as being pleasurable. When it’s not a recreational routine, when it’s performed out in the world, as part of everyday life, then driving, in the Valley view, can only be understood within the context of another formalized realm of experience: that of productive busyness. Every experience has to be cleanly defined, has to be categorized. There’s a place and a time for recreation, and there’s a place and a time for productivity.

This discomfort with the informal, with experience that is psychologically unbounded, that flits between and beyond categories, can be felt in a lot of the Valley’s consumer goods and services. Many personal apps and gadgets have the effect, or at least the intended effect, of formalizing informal activities. Once you strap on a Fitbit, you transform what might have been a pleasant walk in the park into a program of physical therapy. A passing observation that once might have earned a few fleeting smiles or shrugs before disappearing into the ether is now, thanks to the distribution systems of Facebook and Twitter, encapsulated as a product and subjected to formal measurement; every remark gets its own Nielsen rating.

What’s the source of this crabbed view of experience? I’m not sure. It may be an expression of a certain personality type. It may be a sign of the market’s continuing colonization of the quotidian. I’d guess it also has something to do with the rigorously formal qualities of programming itself. The universality of the digital computer ends — comes to a crashing halt, in fact — where informality begins.

Image: Burt and Sally mix their pleasures in “Smokey and the Bandit.”