Theses in tweetform (fourth series)

[first series, 2012]

1. The complexity of the medium is inversely proportional to the eloquence of the message.

2. Hypertext is a more conservative medium than text.

3. The best medium for the nonlinear narrative is the linear page.

4. Twitter is a more ruminative medium than Facebook.

5. The introduction of digital tools has never improved the quality of an art form.

6. The returns on interactivity quickly turn negative.

7. In the material world, doing is knowing; in media, the opposite is often true.

8. Facebook’s profitability is directly tied to the shallowness of its members: hence its strategy.

9. Increasing the intelligence of a network tends to decrease the intelligence of those connected to it.

10. The one new art form spawned by the computer – the videogame – is the computer’s prisoner.

11. Personal correspondence grows less interesting as the speed of its delivery quickens.

12. Programmers are the unacknowledged legislators of the world.

13. The album cover turned out to be indispensable to popular music.

14. The pursuit of followers on Twitter is an occupation of the bourgeoisie.

15. Abundance of information breeds delusions of knowledge among the unwary.

16. No great work of literature could have been written in hypertext.

17. The philistine appears ideally suited to the role of cultural impresario online.

18. Television became more interesting when people started paying for it.

19. Instagram shows us what a world without art looks like.

20. Online conversation is to oral conversation as a mask is to a face.

[second series, 2013]

21. Recommendation engines are the best cure for hubris.

22. Vines would be better if they were one second shorter.

23. Hell is other selfies.

24. Twitter has revealed that brevity and verbosity are not always antonyms.

25. Personalized ads provide a running critique of artificial intelligence.

26. Who you are is what you do between notifications.

27. Online is to offline as a swimming pool is to a pond.

28. People in love leave the sparsest data trails.

29. YouTube fan videos are the living fossils of the original web.

30. Mark Zuckerberg is the Grigory Potemkin of our time.

[third series, 2014]

31. Every point on the internet is a center of the internet.

32. On Twitter, one’s sense of solipsism intensifies as one’s follower count grows.

33. A thing contains infinitely more information than its image.

34. A book has many pages; an ebook has one page.

35. If a hard drive is a soul, the cloud is the oversoul.

36. A self-driving car is a contradiction in terms.

37. The essence of an event is the ghost in the recording.

38. A Snapchat message becomes legible as it vanishes.

39. When we turn on a GPS system, we become cargo.

40. Google searches us.

[fourth series]

41. Tools extend us; technology confines us.

42. People take facts as metaphors; computers take metaphors as facts.

43. We need not fear robots until robots fear us.

44. Programmers are ethicists in denial.

45. The dream of frictionlessness is a death wish.

46. A car without a steering wheel is comic; a car without a rearview mirror is tragic.

47. One feels lightest after one clears one’s browser cache.

48. The things of the world manifest themselves as either presence or absence.

49. Memory is the medium of absence; time is the medium of presence.

50. A bird resembles us most when it flies into a window.

Image: Sam-Cat.

What do robots do?

Yesterday I posted an excerpt from the start of Paul Goodman’s 1969 NYRB essay “Can Technology Be Humane?” Here’s another bit, equally relevant to our current situation, from later in the piece, when Goodman turns his attention to automation, robots, and what we today call “big data”:

In automating there is an analogous dilemma of how to cope with masses of people and get economies of scale, without losing the individual at great consequent human and economic cost. A question of immense importance for the immediate future is, Which functions should be automated or organized to use business machines, and which should not? This question also is not getting asked, and the present disposition is that the sky is the limit for extraction, refining, manufacturing, processing, packaging, transportation, clerical work, ticketing, transactions, information retrieval, recruitment, middle management, evaluation, diagnosis, instruction, and even research and invention. Whether the machines can do all these kinds of jobs and more is partly an empirical question, but it also partly depends on what is meant by doing a job. Very often, e.g., in college admissions, machines are acquired for putative economies (which do not eventuate); but the true reason is that an overgrown and overcentralized organization cannot be administered without them. The technology conceals the essential trouble, e.g., that there is no community of scholars and students are treated like things. The function is badly performed, and finally the system breaks down anyway. I doubt that enterprises in which interpersonal relations are important are suited to much programming.

But worse, what can happen is that the real function of the enterprise is subtly altered so that it is suitable for the mechanical system. (E.g., “information retrieval” is taken as an adequate replacement for critical scholarship.) Incommensurable factors, individual differences, the local context, the weighting of evidence are quietly overlooked though they may be of the essence. The system, with its subtly transformed purposes, seems to run very smoothly; it is productive, and it is more and more out of line with the nature of things and the real problems. Meantime it is geared in with other enterprises of society e.g., major public policy may depend on welfare or unemployment statistics which, as they are tabulated, are blind to the actual lives of poor families. In such a case, the particular system may not break down, the whole society may explode.

I need hardly point out that American society is peculiarly liable to the corruption of inauthenticity, busily producing phony products. It lives by public relations, abstract ideals, front politics, show-business communications, mandarin credentials. It is preeminently overtechnologized. And computer technologists especially suffer the euphoria of being in a new and rapidly expanding field. It is so astonishing that the robot can do the job at all or seem to do it, that it is easy to blink at the fact that he is doing it badly or isn’t really doing quite that job.

Goodman here makes a crucial point that still gets overlooked in discussions of automation. Computers and people work in different ways. When any task is shifted from a person to a computer, therefore, the task changes in order to be made suitable for the computer. As the process of automation continues, the context in which the task is performed also changes, in order to be made amenable to automation. The enterprise changes, the school changes, the hospital changes, the household changes, the economy changes, the society changes. The temptation, all along the way, is to look to the computer to provide the measures by which we evaluate those changes, which ends up concealing rather than revealing the true and full nature of the changes. Goodman expresses the danger succinctly: “The system, with its subtly transformed purposes, seems to run very smoothly; it is productive, and it is more and more out of line with the nature of things and the real problems.”

Image: cutetape.

The prudent technologist

Paul Goodman, 1969:

Whether or not it draws on new scientific research, technology is a branch of moral philosophy, not of science. It aims at prudent goods for the commonweal and to provide efficient means for these goods. At present, however, “scientific technology” occupies a bastard position in the universities, in funding, and in the public mind. It is half tied to the theoretical sciences and half treated as mere know-how for political and commercial purposes. It has no principles of its own. To remedy this—so Karl Jaspers in Europe and Robert Hutchins in America have urged—technology must have its proper place on the faculty as a learned profession important in modern society, along with medicine, law, the humanities, and natural philosophy, learning from them and having something to teach them. As a moral philosopher, a technician should be able to criticize the programs given him to implement. As a professional in a community of learned professionals, a technologist must have a different kind of training and develop a different character than we see at present among technicians and engineers. He should know something of the social sciences, law, the fine arts, and medicine, as well as relevant natural sciences.

Prudence is foresight, caution, utility. Thus it is up to the technologists, not to regulatory agencies of the government, to provide for safety and to think about remote effects. This is what Ralph Nader is saying and Rachel Carson used to ask. An important aspect of caution is flexibility, to avoid the pyramiding catastrophe that occurs when something goes wrong in interlocking technologies, as in urban power failures. Naturally, to take responsibility for such things often requires standing up to the front office and urban politicians, and technologists must organize themselves in order to have power to do it.

Write-only

Alan Jacobs:

Digital textuality offers us the chance to restore commentary to its pre-modern place as the central scholarly genre.

Recent technologies enable a renewal of commentary, but struggle to overcome a post-Romantic belief that commentary is belated, derivative. …

If our textual technologies promote commentary but we resist it, we will achieve a Pyrrhic victory over our technologies.

Andrew Piper:

The main difference between our moment and the lost world of pre-modern commentary that Jacobs invokes is of course a material one. In a context of hand-written documents, transcription was the primary activity that consumed most individuals’ time. Transcription preceded, but also informed commentary (as practiced by the medieval Arab translator Joannitius). Who would be flippant when it had just taken weeks to copy something out? The submission that Jacobs highlights as a prerequisite of good commentary — a privileging of someone else’s point of view over our own — was a product of corporeal labor. Our bodies shaped our minds’ eye.

It’s interesting that Jacobs and Piper offer different explanations for the diminished role of textual commentary in intellectual life. Jacobs traces it to a shift in cultural attitudes, particularly our recent, post-Romantic embrace of self-expression and originality at the expense of humility and receptiveness. Tacitly, he also implicates the even more recent, post-modern belief that the written word is something to be approached with suspicion rather than respect. For Piper, the reason lies in an earlier shift in media technology: when the printing press and other tools for the mechanical reproduction of text removed the need for manual transcription, they also reduced the depth of response, and the humbleness, that transcription promoted. “Who would be flippant when it had just taken weeks to copy something out?” These explanations are not mutually exclusive, of course, and the tension between them seems apt, as both Jacobs and Piper seek to explore the intersection of, on the one hand, reading and writing technologies and, on the other, cultural attitudes toward reading and writing.

While the presentation of text on shared computer networks does open up a vast territory for comment, what Jacobs terms “digital textuality” is hardly promoting the kind of self-effacing commentary he yearns for. The two essential innovations of computerized writing and reading — the word processor’s cut-and-paste function and the hypertext of the web — make text malleable and provisional. Presented on a computer, the written work is no longer an artifact to be contemplated and pondered but rather raw material to be worked over by the creative I — not a sculpture but a gob of clay. Reading becomes a means of re-writing. Textual technologies make text submissive and subservient to the reader, not the other way around. They encourage, toward the text, not the posture of the monk but the posture of the graffiti artist. Is it any wonder that most online comments feel as though they were written in spray paint?

I’m exaggerating, a bit. It’s possible to sketch out an alternative history of the net in which thoughtful reading and commentary play a bigger role. In its original form, the blog, or web log, was more a reader’s medium than a writer’s medium. And one can, without too much work, find deeply considered comment threads spinning out from online writings. But the blog turned into a writer’s medium, and readerly comments remain the exception, as both Jacobs and Piper agree. One of the dreams for the web, expressed through a computer metaphor, was that it would be a “read-write” medium rather than a “read-only” medium. In reality, the web is more of a write-only medium, with the desire for self-expression largely subsuming the act of reading. So I’m doubtful about Jacobs’s suggestion that the potential of our new textual technologies is being frustrated by our cultural tendencies. The technologies and the culture seem of a piece. We’re not resisting the tools; we’re using them as they were designed to be used.

Could this change? Maybe. “Not all is lost today,” writes Piper. “While comment threads seethe, there is also a vibrant movement afoot to remake the web as a massive space of commentary. The annotated web, as it’s called, has the aim of transforming our writing spaces from linked planes to layered marginalia.” But this, too, is an old dream. I remember a lot of excitement (and trepidation) about the “annotated web” at the end of the nineties. Browser plug-ins like Third Voice created an annotation layer on top of all web pages. If you had the plug-in installed, you could write your own comments on any page you visited, as well as read the comments written by others. But the attempt to create an annotated web failed. And it wasn’t just because the early adopters were spammers and trolls (though they were). Nor was it because corporate web publishers resisted the attempt to open their properties to outside commentary (though they did). What killed the annotated web was a lack of interest. Few could be bothered to download and install the plug-in. As Wired noted in a 2001 obituary for Third Voice, “with only a couple hundred thousand users at last count, Third Voice was never the killer app it promised to be. But its passage was a silent testament to the early idealism of the Web, and how the ubiquitous ad model killed it.”

It’s possible that new attempts to build an annotation layer will succeed where the earlier ones failed. Piper points in particular to Hypothes.is. And it’s also possible that a narrower application of an annotation layer, one designed specifically for scholarship, will arise. But I’m not holding my breath. I think Piper is correct in arguing that the real challenge is not creating a technology for annotation but re-creating a culture in which careful reading and commentary are as valued as self-expression: “It’s all well and good to say commentary is back. It’s another to truly re-imagine how a second grader or college student learns to write. What if we taught commentary instead of expression, not just for beginning writers, but right on through university and the PhD?” Piper may disagree, but that strikes me as a fundamentally anti-digital idea. If “a privileging of someone else’s point of view over our own” requires, as Piper writes, the submissiveness that comes from “corporeal labor,” then what is necessary above all is the re-embodiment of text.

Image of woodblock prepared for printing: Wikipedia.

The illusion of knowledge

This post, along with seventy-eight others, is collected in the book Utopia Is Creepy.

The internet may be making us shallow, but it’s making us think we’re deep.

A newly published study, by three Yale psychologists, shows that searching the web gives people an “illusion of knowledge.” They start to confuse what’s online with what’s in their head, which gives them an exaggerated sense of their own intelligence. The effect isn’t limited to the particular subject areas that people explore on the web. It’s more general than that. Doing searches on one topic inflates people’s sense of how well they understand other, unrelated topics. As the researchers explain:

One’s self-assessed ability to answer questions increased after searching for explanations online in a previous, unrelated task, an effect that held even after controlling for time, content, and features of the search process. The effect derives from a true misattribution of the sources of knowledge, not a change in understanding of what counts as internal knowledge and is not driven by a “halo effect” or general overconfidence. We provide evidence that this effect occurs specifically because information online can so easily be accessed through search.

The researchers, Matthew Fisher, Mariel Goddu, and Frank Keil, documented the effect, and its cause, through nine experiments. They divided test subjects into two groups. One group spent time searching the web, the other group stayed offline, and then both groups estimated, in a variety of ways, their understanding of various topics. The experiments consistently showed that searching the web gives people an exaggerated sense of their own knowledge.

To make sure that searchers’ overconfidence in assessing their smarts stemmed from a misperception about the depth of knowledge in their own heads (rather than reflecting a confidence in their ability to Google the necessary information), the psychologists, in one of the experiments, had the test subjects make estimates of their brain activity:

Instead of asking participants to rate how well they could answer questions about topics using a Likert scale ranging from 1 (very poorly) to 7 (very well), participants were shown a scale consisting of seven functional MRI (fMRI) images of varying levels of activation, as illustrated by colored regions of increasing size. Participants were told, “Scientists have shown that increased activity in certain brain regions corresponds with higher quality explanations.” This dependent variable was designed to unambiguously emphasize one’s brain as the location of personally held knowledge. Participants were then asked to select the image that would correspond with their brain activity when they answered the self-assessed knowledge questions.

The subjects who searched the net before the task rated their anticipated brain activity as being significantly stronger than did the control group who hadn’t been looking up information online.

Similar misperceptions may be produced by consulting other external, or “transactive,” sources of knowledge, the researchers note, but the illusion is probably much stronger with the web, given its unprecedented scope and accessibility:

This illusion of knowledge might well be found for sources other than the Internet: for example, an expert librarian may experience a similar illusion when accessing a reference Rolodex. … While such effects may be possible, the rise of the Internet has surely broadened the scope of this effect. Before the Internet, there was no similarly massive, external knowledge database. People relied on less immediate and accessible inanimate stores of external knowledge, such as books—or, they relied on other minds in transactive memory systems. In contrast with other sources and cognitive tools for informational access, the Internet is nearly always accessible, can be searched efficiently, and provides immediate feedback. For these reasons, the Internet might become even more easily integrated with the human mind than other external sources of knowledge and perhaps even more so than human transactive memory partners, promoting much stronger illusions of knowledge.

This is just one study, but it comes on the heels of a series of other studies on how access to the web and search engines is influencing the way our minds construct, or don’t construct, personal knowledge. A 2011 Columbia study found that the ready availability of online information reduces people’s retention of facts: “when people expect to have future access to [online] information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it,” a phenomenon which indicates “that processes of human memory are adapting to the advent of new computing and communication technology.” A 2014 Fairfield University study found that simply taking digital photographs of an experience will tend to reduce your memory of the experience. The University of Colorado’s Adrian Ward has found evidence that the shift from “biological information storage” toward “digital information storage” may “have large-scale and long-term effects on the way people remember and process information.” He says that the internet “may act as a ‘supernormal stimulus,’ hijacking preexisting cognitive tendencies and creating novel outcomes.”

In “How Google Is Changing Your Brain,” a 2013 Scientific American article written with the late Daniel Wegner, Ward reported on experiments revealing that

using Google gives people the sense that the Internet has become part of their own cognitive tool set. A search result was recalled not as a date or name lifted from a Web page but as a product of what resided inside the study participants’ own memories, allowing them to effectively take credit for knowing things that were a product of Google’s search algorithms. The psychological impact of splitting our memories equally between the Internet and the brain’s gray matter points to a lingering irony. The advent of the “information age” seems to have created a generation of people who feel they know more than ever before—when their reliance on the Internet means that they may know ever less about the world around them.

Ignorance is bliss, particularly when it’s mistaken for knowledge.

Image: detail of M. C. Escher’s “Man with Cuboid.”

The robot pharmacist

If you want to understand the complexities and pitfalls of automating medicine (and professional work in general), please read Bob Wachter’s story, adapted from his new book The Digital Doctor, of how Pablo Garcia, a 16-year-old patient at the University of California’s San Francisco Medical Center, came to be given a dose of 38 ½ antibiotic pills rather than the single pill he should have been given. (Part 1, part 2, part 3; part 4 will appear tomorrow.) Pretty much every problem with computer automation that I write about in The Glass Cage — automation complacency, automation bias, alert fatigue, overcomplexity, distraction, miscommunication, workload spikes, etc. — is on display in the chain of events that Wachter, himself a physician, describes.

It’s a complicated story, with many players and many moving parts, but I’ll just highlight one crucial episode. After the erroneous drug order enters the hospital’s computerized prescription system, the result of (among other things) a poorly designed software template, the order is transmitted to the hospital’s pill-packaging robot. Whereas a pharmacist or a pharmacy technician would almost certainly have noticed that something was amiss with the order, the robot dutifully packages up the 38 ½ pills as a single dose without a second’s hesitation:

The robot, installed in 2010 at a cost of $7 million, is programmed to pull medications off stocked shelves; to insert the pills into shrink-wrapped, bar-coded packages; to bind these packages together with little plastic rings; and then to send them by van to locked cabinets on the patient floors. “It gives us the first important step in eliminating the potential for human error,” said UCSF Medical Center CEO Mark Laret when the robot was introduced.

Like most robots, UCSF’s can work around the clock, never needing a break and never succumbing to a distraction.

In the blink of an eye, the order for Pablo Garcia’s Septra tablets zipped from the hospital’s computer to the robot, which dutifully collected the 38 ½ Septra tablets, placed them on a half-dozen rings, and sent them to Pablo’s floor, where they came to rest in a small bin waiting for the nurse to administer them at the appointed time. “If the order goes to the robot, the techs just sort it by location and put it in a bin, and that’s it,” [hospital pharmacist] Chan told me. “They eliminated the step of the pharmacist checking on the robot, because the idea is you’re paying so much money because it’s so accurate.”

Far from eliminating human error, the replacement of an experienced professional with a robot ensured that a major error went unnoticed. Indeed, by giving the mistaken dose the imprimatur of a computer, in the form of an official, sealed, bar-coded package, the robot pretty much guaranteed that the dispensing nurse, falling victim to automation bias, would reject her own doubts and give the child all the pills.

The problems with handwritten prescriptions — it’s all too easy to misinterpret doctors’ scribbles, sometimes to fatal effect — are legendary. But solving that very real problem with layers of computers, software templates, and robots introduces a whole new set of problems, most of which are never foreseen by the system’s designers. As is often the case in automating complex processes, the computers and their human partners end up working at cross-purposes, each operating under a different set of assumptions. Wachter explains:

As Pablo Garcia’s case illustrates, many of the new holes in the Swiss cheese weren’t caused by the computer doing something wrong, per se. They were caused by the complex, and under-appreciated, challenges that can arise when real humans — busy, stressed humans with all of our cognitive biases — come up against new technologies that alter the work in subtle ways that can create new hazards.

The lesson isn’t that computers and robots don’t have an important role to play in medicine. The lesson is that automated systems are also human systems. They work best when designed with a painstaking attentiveness to the skills and foibles of human beings. When people, particularly skilled, experienced professionals, are pushed to the sidelines, in the blind pursuit of efficiency, bad things happen.

Pablo Garcia survived the overdose, though not without a struggle.