Vocabulary is rarely so rich, so dense with branch and twig, as in the realm of flora and fauna. Plants and animals go by all sorts of strange and evocative names depending on where you are and whom you’re talking with. One local term for the kestrel, reports Robert Macfarlane in an article in Orion, is wind-fucker. Having learned the word, he writes, “it is hard now not to see in the pose of the hovering kestrel a certain lustful quiver.”

I’m reminded of Seamus Heaney’s translation of a Middle English poem, “The Names of the Hare”:

Beat-the-pad, white-face,
funk-the-ditch, shit-ass.

The wimount, the messer,
the skidaddler, the nibbler,
the ill-met, the slabber.

The quick-scut, the dew-flirt,
the grass-biter, the goibert,
the home-late, the do-the-dirt.

It goes on that way for a couple dozen more lines, each of which brings you a little closer to the nature of the beastie.

Macfarlane’s piece, drawn from his forthcoming book Landmarks, was inspired by the discovery that a great dictionary for kids, the Oxford Junior Dictionary, is being pruned of words describing the stuff of the natural world. Being inserted in their place are words describing the abstractions and symbols of the digital and bureaucratic spheres:

Under pressure, Oxford University Press revealed a list of the entries it no longer felt to be relevant to a modern-day childhood. The deletions included acorn, adder, ash, beech, bluebell, buttercup, catkin, conker, cowslip, cygnet, dandelion, fern, hazel, heather, heron, ivy, kingfisher, lark, mistletoe, nectar, newt, otter, pasture, and willow. The words introduced to the new edition included attachment, block-graph, blog, broadband, bullet-point, celebrity, chatroom, committee, cut-and-paste, MP3 player, and voice-mail.

They yanked out bluebell and put in bullet-point? What shit-asses.

The substitutions made in the dictionary — the outdoor and the natural being displaced by the indoor and the virtual — are a small but significant symptom of the simulated life we increasingly live. Children are now (and valuably) adept ecologists of the technoscape, with numerous terms for file types but few for different trees and creatures. A basic literacy of landscape is falling away up and down the ages.

As Macfarlane goes on to say, the changes in the dictionary don’t just testify to our weakening grasp on nature. Something else is being lost: “a kind of word magic, the power that certain terms possess to enchant our relations with nature and place.”

As the writer Henry Porter observed, the OUP deletions removed the “euphonious vocabulary of the natural world — words which do not simply label an object or action but in some mysterious and beautiful way become part of it.”

I’m sure that many will label Macfarlane and Porter “romantics.” I’ve begun to notice that romantic is replacing Luddite and nostalgist as the insult-of-choice deployed by techno-apologists to dismiss anyone with more expansive interests than their own. That, too, is telling. It’s always been a sin against progress to look backward. Now it’s also a sin against progress to look inward. And so, fading from sight and imagination alike, the world becomes ever vaguer to us — not mysterious but peripheral, its things unworthy even of being named. Who now would think of the wind as something that might be fucked?

Photo: Rick Cameron.

Smart-tech smackdown


Forget Mayweather and Pacquiao. Tonight in New York, Intelligence Squared is hosting a debate on the proposition “Smart Technology Is Making Us Dumb.” Arguing for the proposition will be Andrew Keen and I. Arguing against it will be Genevieve Bell and David Weinberger. The event is sold out, but you can watch the video here.

Theses in tweetform (fourth series)


[first series, 2012]

1. The complexity of the medium is inversely proportional to the eloquence of the message.

2. Hypertext is a more conservative medium than text.

3. The best medium for the nonlinear narrative is the linear page.

4. Twitter is a more ruminative medium than Facebook.

5. The introduction of digital tools has never improved the quality of an art form.

6. The returns on interactivity quickly turn negative.

7. In the material world, doing is knowing; in media, the opposite is often true.

8. Facebook’s profitability is directly tied to the shallowness of its members: hence its strategy.

9. Increasing the intelligence of a network tends to decrease the intelligence of those connected to it.

10. The one new art form spawned by the computer – the videogame – is the computer’s prisoner.

11. Personal correspondence grows less interesting as the speed of its delivery quickens.

12. Programmers are the unacknowledged legislators of the world.

13. The album cover turned out to be indispensable to popular music.

14. The pursuit of followers on Twitter is an occupation of the bourgeoisie.

15. Abundance of information breeds delusions of knowledge among the unwary.

16. No great work of literature could have been written in hypertext.

17. The philistine appears ideally suited to the role of cultural impresario online.

18. Television became more interesting when people started paying for it.

19. Instagram shows us what a world without art looks like.

20. Online conversation is to oral conversation as a mask is to a face.

[second series, 2013]

21. Recommendation engines are the best cure for hubris.

22. Vines would be better if they were one second shorter.

23. Hell is other selfies.

24. Twitter has revealed that brevity and verbosity are not always antonyms.

25. Personalized ads provide a running critique of artificial intelligence.

26. Who you are is what you do between notifications.

27. Online is to offline as a swimming pool is to a pond.

28. People in love leave the sparsest data trails.

29. YouTube fan videos are the living fossils of the original web.

30. Mark Zuckerberg is the Grigory Potemkin of our time.

[third series, 2014]

31. Every point on the internet is a center of the internet.

32. On Twitter, one’s sense of solipsism intensifies as one’s follower count grows.

33. A thing contains infinitely more information than its image.

34. A book has many pages; an ebook has one page.

35. If a hard drive is a soul, the cloud is the oversoul.

36. A self-driving car is a contradiction in terms.

37. The essence of an event is the ghost in the recording.

38. A Snapchat message becomes legible as it vanishes.

39. When we turn on a GPS system, we become cargo.

40. Google searches us.

[fourth series]

41. Tools extend us; technology confines us.

42. People take facts as metaphors; computers take metaphors as facts.

43. We need not fear robots until robots fear us.

44. Programmers are ethicists in denial.

45. The dream of frictionlessness is a death wish.

46. A car without a steering wheel is comic; a car without a rearview mirror is tragic.

47. One feels lightest after one clears one’s browser cache.

48. The things of the world manifest themselves as either presence or absence.

49. Memory is the medium of absence; time is the medium of presence.

50. A bird resembles us most when it flies into a window.

Image: Sam-Cat.

What do robots do?


Yesterday I posted an excerpt from the start of Paul Goodman’s 1969 NYRB essay “Can Technology Be Humane?” Here’s another bit, equally relevant to our current situation, from later in the piece, when Goodman turns his attention to automation, robots, and what we today call “big data”:

In automating there is an analogous dilemma of how to cope with masses of people and get economies of scale, without losing the individual at great consequent human and economic cost. A question of immense importance for the immediate future is, Which functions should be automated or organized to use business machines, and which should not? This question also is not getting asked, and the present disposition is that the sky is the limit for extraction, refining, manufacturing, processing, packaging, transportation, clerical work, ticketing, transactions, information retrieval, recruitment, middle management, evaluation, diagnosis, instruction, and even research and invention. Whether the machines can do all these kinds of jobs and more is partly an empirical question, but it also partly depends on what is meant by doing a job. Very often, e.g., in college admissions, machines are acquired for putative economies (which do not eventuate); but the true reason is that an overgrown and overcentralized organization cannot be administered without them. The technology conceals the essential trouble, e.g., that there is no community of scholars and students are treated like things. The function is badly performed, and finally the system breaks down anyway. I doubt that enterprises in which interpersonal relations are important are suited to much programming.

But worse, what can happen is that the real function of the enterprise is subtly altered so that it is suitable for the mechanical system. (E.g., “information retrieval” is taken as an adequate replacement for critical scholarship.) Incommensurable factors, individual differences, the local context, the weighting of evidence are quietly overlooked though they may be of the essence. The system, with its subtly transformed purposes, seems to run very smoothly; it is productive, and it is more and more out of line with the nature of things and the real problems. Meantime it is geared in with other enterprises of society e.g., major public policy may depend on welfare or unemployment statistics which, as they are tabulated, are blind to the actual lives of poor families. In such a case, the particular system may not break down, the whole society may explode.

I need hardly point out that American society is peculiarly liable to the corruption of inauthenticity, busily producing phony products. It lives by public relations, abstract ideals, front politics, show-business communications, mandarin credentials. It is preeminently overtechnologized. And computer technologists especially suffer the euphoria of being in a new and rapidly expanding field. It is so astonishing that the robot can do the job at all or seem to do it, that it is easy to blink at the fact that he is doing it badly or isn’t really doing quite that job.

Goodman here makes a crucial point that still gets overlooked in discussions of automation. Computers and people work in different ways. When any task is shifted from a person to a computer, therefore, the task changes in order to be made suitable for the computer. As the process of automation continues, the context in which the task is performed also changes, in order to be made amenable to automation. The enterprise changes, the school changes, the hospital changes, the household changes, the economy changes, the society changes. The temptation, all along the way, is to look to the computer to provide the measures by which we evaluate those changes, which ends up concealing rather than revealing the true and full nature of the changes. Goodman expresses the danger succinctly: “The system, with its subtly transformed purposes, seems to run very smoothly; it is productive, and it is more and more out of line with the nature of things and the real problems.”

Image: cutetape.

The prudent technologist


Paul Goodman, 1969:

Whether or not it draws on new scientific research, technology is a branch of moral philosophy, not of science. It aims at prudent goods for the commonweal and to provide efficient means for these goods. At present, however, “scientific technology” occupies a bastard position in the universities, in funding, and in the public mind. It is half tied to the theoretical sciences and half treated as mere know-how for political and commercial purposes. It has no principles of its own. To remedy this—so Karl Jaspers in Europe and Robert Hutchins in America have urged—technology must have its proper place on the faculty as a learned profession important in modern society, along with medicine, law, the humanities, and natural philosophy, learning from them and having something to teach them. As a moral philosopher, a technician should be able to criticize the programs given him to implement. As a professional in a community of learned professionals, a technologist must have a different kind of training and develop a different character than we see at present among technicians and engineers. He should know something of the social sciences, law, the fine arts, and medicine, as well as relevant natural sciences.

Prudence is foresight, caution, utility. Thus it is up to the technologists, not to regulatory agencies of the government, to provide for safety and to think about remote effects. This is what Ralph Nader is saying and Rachel Carson used to ask. An important aspect of caution is flexibility, to avoid the pyramiding catastrophe that occurs when something goes wrong in interlocking technologies, as in urban power failures. Naturally, to take responsibility for such things often requires standing up to the front office and urban politicians, and technologists must organize themselves in order to have power to do it.



Alan Jacobs:

Digital textuality offers us the chance to restore commentary to its pre-modern place as the central scholarly genre.

Recent technologies enable a renewal of commentary, but struggle to overcome a post-Romantic belief that commentary is belated, derivative. …

If our textual technologies promote commentary but we resist it, we will achieve a Pyrrhic victory over our technologies.

Andrew Piper:

The main difference between our moment and the lost world of pre-modern commentary that Jacobs invokes is of course a material one. In a context of hand-written documents, transcription was the primary activity that consumed most individuals’ time. Transcription preceded, but also informed commentary (as practiced by the medieval Arab translator Joannitius). Who would be flippant when it had just taken weeks to copy something out? The submission that Jacobs highlights as a prerequisite of good commentary — a privileging of someone else’s point of view over our own — was a product of corporeal labor. Our bodies shaped our minds’ eye.

It’s interesting that Jacobs and Piper offer different explanations for the diminished role of textual commentary in intellectual life. Jacobs traces it to a shift in cultural attitudes, particularly our recent, post-Romantic embrace of self-expression and originality at the expense of humility and receptiveness. Tacitly, he also implicates the even more recent, post-modern belief that the written word is something to be approached with suspicion rather than respect. For Piper, the reason lies in an earlier shift in media technology: when the printing press and other tools for the mechanical reproduction of text removed the need for manual transcription, they also reduced the depth of response, and the humbleness, that transcription promoted. “Who would be flippant when it had just taken weeks to copy something out?” These explanations are not mutually exclusive, of course, and the tension between them seems apt, as both Jacobs and Piper seek to explore the intersection of, on the one hand, reading and writing technologies and, on the other, cultural attitudes toward reading and writing.

While the presentation of text on shared computer networks does open up a vast territory for comment, what Jacobs terms “digital textuality” is hardly promoting the kind of self-effacing commentary he yearns for. The two essential innovations of computerized writing and reading — the word processor’s cut-and-paste function and the hypertext of the web — make text malleable and provisional. Presented on a computer, the written work is no longer an artifact to be contemplated and pondered but rather raw material to be worked over by the creative I — not a sculpture but a gob of clay. Reading becomes a means of re-writing. Textual technologies make text submissive and subservient to the reader, not the other way around. They encourage, toward the text, not the posture of the monk but the posture of the graffiti artist. Is it any wonder that most online comments feel as though they were written in spray paint?

I’m exaggerating, a bit. It’s possible to sketch out an alternative history of the net in which thoughtful reading and commentary play a bigger role. In its original form, the blog, or web log, was more a reader’s medium than a writer’s medium. And one can, without too much work, find deeply considered comment threads spinning out from online writings. But the blog turned into a writer’s medium, and readerly comments remain the exception, as both Jacobs and Piper agree. One of the dreams for the web, expressed through a computer metaphor, was that it would be a “read-write” medium rather than a “read-only” medium. In reality, the web is more of a write-only medium, with the desire for self-expression largely subsuming the act of reading. So I’m doubtful about Jacobs’s suggestion that the potential of our new textual technologies is being frustrated by our cultural tendencies. The technologies and the culture seem of a piece. We’re not resisting the tools; we’re using them as they were designed to be used.

Could this change? Maybe. “Not all is lost today,” writes Piper. “While comment threads seethe, there is also a vibrant movement afoot to remake the web as a massive space of commentary. The annotated web, as it’s called, has the aim of transforming our writing spaces from linked planes to layered marginalia.” But this, too, is an old dream. I remember a lot of excitement (and trepidation) about the “annotated web” at the end of the nineties. Browser plug-ins like Third Voice created an annotation layer on top of all web pages. If you had the plug-in installed, you could write your own comments on any page you visited, as well as read the comments written by others. But the attempt to create an annotated web failed. And it wasn’t just because the early adopters were spammers and trolls (though they were). Nor was it because corporate web publishers resisted the attempt to open their properties to outside commentary (though they did). What killed the annotated web was a lack of interest. Few could be bothered to download and install the plug-in. As Wired noted in a 2001 obituary for Third Voice, “with only a couple hundred thousand users at last count, Third Voice was never the killer app it promised to be. But its passage was a silent testament to the early idealism of the Web, and how the ubiquitous ad model killed it.”

It’s possible that new attempts to build an annotation layer will succeed where the earlier ones failed. And it’s also possible that a narrower application of an annotation layer, one designed specifically for scholarship, will arise. But I’m not holding my breath. I think Piper is correct in arguing that the real challenge is not creating a technology for annotation but re-creating a culture in which careful reading and commentary are as valued as self-expression: “It’s all well and good to say commentary is back. It’s another to truly re-imagine how a second grader or college student learns to write. What if we taught commentary instead of expression, not just for beginning writers, but right on through university and the PhD?” Piper may disagree, but that strikes me as a fundamentally anti-digital idea. If “a privileging of someone else’s point of view over our own” requires, as Piper writes, the submissiveness that comes from “corporeal labor,” then what is necessary above all is the re-embodiment of text.

Image of woodblock prepared for printing: Wikipedia.