Write-only


Alan Jacobs:

Digital textuality offers us the chance to restore commentary to its pre-modern place as the central scholarly genre.

Recent technologies enable a renewal of commentary, but struggle to overcome a post-Romantic belief that commentary is belated, derivative. …

If our textual technologies promote commentary but we resist it, we will achieve a Pyrrhic victory over our technologies.

Andrew Piper:

The main difference between our moment and the lost world of pre-modern commentary that Jacobs invokes is of course a material one. In a context of hand-written documents, transcription was the primary activity that consumed most individuals’ time. Transcription preceded, but also informed commentary (as practiced by the medieval Arab translator Joannitius). Who would be flippant when it had just taken weeks to copy something out? The submission that Jacobs highlights as a prerequisite of good commentary — a privileging of someone else’s point of view over our own — was a product of corporeal labor. Our bodies shaped our minds’ eye.

It’s interesting that Jacobs and Piper offer different explanations for the diminished role of textual commentary in intellectual life. Jacobs traces it to a shift in cultural attitudes, particularly our recent, post-Romantic embrace of self-expression and originality at the expense of humility and receptiveness. Tacitly, he also implicates the even more recent, post-modern belief that the written word is something to be approached with suspicion rather than respect. For Piper, the reason lies in an earlier shift in media technology: when the printing press and other tools for the mechanical reproduction of text removed the need for manual transcription, they also reduced the depth of response, and the humbleness, that transcription promoted. “Who would be flippant when it had just taken weeks to copy something out?” These explanations are not mutually exclusive, of course, and the tension between them seems apt, as both Jacobs and Piper seek to explore the intersection of, on the one hand, reading and writing technologies and, on the other, cultural attitudes toward reading and writing.

While the presentation of text on shared computer networks does open up a vast territory for comment, what Jacobs terms “digital textuality” is hardly promoting the kind of self-effacing commentary he yearns for. The two essential innovations of computerized writing and reading — the word processor’s cut-and-paste function and the hypertext of the web — make text malleable and provisional. Presented on a computer, the written work is no longer an artifact to be contemplated and pondered but rather raw material to be worked over by the creative I — not a sculpture but a gob of clay. Reading becomes a means of re-writing. Textual technologies make text submissive and subservient to the reader, not the other way around. They encourage, toward the text, not the posture of the monk but the posture of the graffiti artist. Is it any wonder that most online comments feel as though they were written in spray paint?

I’m exaggerating, a bit. It’s possible to sketch out an alternative history of the net in which thoughtful reading and commentary play a bigger role. In its original form, the blog, or web log, was more a reader’s medium than a writer’s medium. And one can, without too much work, find deeply considered comment threads spinning out from online writings. But the blog turned into a writer’s medium, and readerly comments remain the exception, as both Jacobs and Piper agree. One of the dreams for the web, expressed through a computer metaphor, was that it would be a “read-write” medium rather than a “read-only” medium. In reality, the web is more of a write-only medium, with the desire for self-expression largely subsuming the act of reading. So I’m doubtful about Jacobs’s suggestion that the potential of our new textual technologies is being frustrated by our cultural tendencies. The technologies and the culture seem of a piece. We’re not resisting the tools; we’re using them as they were designed to be used.

Could this change? Maybe. “Not all is lost today,” writes Piper. “While comment threads seethe, there is also a vibrant movement afoot to remake the web as a massive space of commentary. The annotated web, as it’s called, has the aim of transforming our writing spaces from linked planes to layered marginalia.” But this, too, is an old dream. I remember a lot of excitement (and trepidation) about the “annotated web” at the end of the nineties. Browser plug-ins like Third Voice created an annotation layer on top of all web pages. If you had the plug-in installed, you could write your own comments on any page you visited, as well as read the comments written by others. But the attempt to create an annotated web failed. And it wasn’t just because the early adopters were spammers and trolls (though they were). Nor was it because corporate web publishers resisted the attempt to open their properties to outside commentary (though they did). What killed the annotated web was a lack of interest. Few could be bothered to download and install the plug-in. As Wired noted in a 2001 obituary for Third Voice, “with only a couple hundred thousand users at last count, Third Voice was never the killer app it promised to be. But its passage was a silent testament to the early idealism of the Web, and how the ubiquitous ad model killed it.”

It’s possible that new attempts to build an annotation layer will succeed where the earlier ones failed. Piper points in particular to Hypothes.is. And it’s also possible that a narrower application of an annotation layer, one designed specifically for scholarship, will arise. But I’m not holding my breath. I think Piper is correct in arguing that the real challenge is not creating a technology for annotation but re-creating a culture in which careful reading and commentary are as valued as self-expression: “It’s all well and good to say commentary is back. It’s another to truly re-imagine how a second grader or college student learns to write. What if we taught commentary instead of expression, not just for beginning writers, but right on through university and the PhD?” Piper may disagree, but that strikes me as a fundamentally anti-digital idea. If “a privileging of someone else’s point of view over our own” requires, as Piper writes, the submissiveness that comes from “corporeal labor,” then what is necessary above all is the re-embodiment of text.

Image of woodblock prepared for printing: Wikipedia.

The illusion of knowledge


This post, along with seventy-eight others, is collected in the book Utopia Is Creepy.

The internet may be making us shallow, but it’s making us think we’re deep.

A newly published study, by three Yale psychologists, shows that searching the web gives people an “illusion of knowledge.” They start to confuse what’s online with what’s in their head, which gives them an exaggerated sense of their own intelligence. The effect isn’t limited to the particular subject areas that people explore on the web. It’s more general than that. Doing searches on one topic inflates people’s sense of how well they understand other, unrelated topics. As the researchers explain:

One’s self-assessed ability to answer questions increased after searching for explanations online in a previous, unrelated task, an effect that held even after controlling for time, content, and features of the search process. The effect derives from a true misattribution of the sources of knowledge, not a change in understanding of what counts as internal knowledge and is not driven by a “halo effect” or general overconfidence. We provide evidence that this effect occurs specifically because information online can so easily be accessed through search.

The researchers, Matthew Fisher, Mariel Goddu, and Frank Keil, documented the effect, and its cause, through nine experiments. They divided test subjects into two groups. One group spent time searching the web, the other group stayed offline, and then both groups estimated, in a variety of ways, their understanding of various topics. The experiments consistently showed that searching the web gives people an exaggerated sense of their own knowledge.

To make sure that searchers’ overconfidence in assessing their smarts stemmed from a misperception about the depth of knowledge in their own heads (rather than reflecting a confidence in their ability to Google the necessary information), the psychologists, in one of the experiments, had the test subjects make estimates of their brain activity:

Instead of asking participants to rate how well they could answer questions about topics using a Likert scale ranging from 1 (very poorly) to 7 (very well), participants were shown a scale consisting of seven functional MRI (fMRI) images of varying levels of activation, as illustrated by colored regions of increasing size. Participants were told, “Scientists have shown that increased activity in certain brain regions corresponds with higher quality explanations.” This dependent variable was designed to unambiguously emphasize one’s brain as the location of personally held knowledge. Participants were then asked to select the image that would correspond with their brain activity when they answered the self-assessed knowledge questions.

The subjects who searched the net before the task rated their anticipated brain activity as being significantly stronger than did the control group who hadn’t been looking up information online.

Similar misperceptions may be produced by consulting other external, or “transactive,” sources of knowledge, the researchers note, but the illusion is probably much stronger with the web, given its unprecedented scope and accessibility:

This illusion of knowledge might well be found for sources other than the Internet: for example, an expert librarian may experience a similar illusion when accessing a reference Rolodex. … While such effects may be possible, the rise of the Internet has surely broadened the scope of this effect. Before the Internet, there was no similarly massive, external knowledge database. People relied on less immediate and accessible inanimate stores of external knowledge, such as books—or, they relied on other minds in transactive memory systems. In contrast with other sources and cognitive tools for informational access, the Internet is nearly always accessible, can be searched efficiently, and provides immediate feedback. For these reasons, the Internet might become even more easily integrated with the human mind than other external sources of knowledge and perhaps even more so than human transactive memory partners, promoting much stronger illusions of knowledge.

This is just one study, but it comes on the heels of a series of other studies on how access to the web and search engines is influencing the way our minds construct, or don’t construct, personal knowledge. A 2011 Columbia study found that the ready availability of online information reduces people’s retention of facts: “when people expect to have future access to [online] information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it,” a phenomenon which indicates “that processes of human memory are adapting to the advent of new computing and communication technology.” A 2014 Fairfield University study found that simply taking digital photographs of an experience will tend to reduce your memory of the experience. The University of Colorado’s Adrian Ward has found evidence that the shift from “biological information storage” toward “digital information storage” may “have large-scale and long-term effects on the way people remember and process information.” He says that the internet “may act as a ‘supernormal stimulus,’ hijacking preexisting cognitive tendencies and creating novel outcomes.”

In “How Google Is Changing Your Brain,” a 2013 Scientific American article written with the late Daniel Wegner, Ward reported on experiments revealing that

using Google gives people the sense that the Internet has become part of their own cognitive tool set. A search result was recalled not as a date or name lifted from a Web page but as a product of what resided inside the study participants’ own memories, allowing them to effectively take credit for knowing things that were a product of Google’s search algorithms. The psychological impact of splitting our memories equally between the Internet and the brain’s gray matter points to a lingering irony. The advent of the “information age” seems to have created a generation of people who feel they know more than ever before—when their reliance on the Internet means that they may know ever less about the world around them.

Ignorance is bliss, particularly when it’s mistaken for knowledge.

Image: detail of M. C. Escher’s “Man with Cuboid.”

The robot pharmacist

[Image: robot rx]

If you want to understand the complexities and pitfalls of automating medicine (and professional work in general), please read Bob Wachter’s story, adapted from his new book The Digital Doctor, of how Pablo Garcia, a 16-year-old patient at the University of California’s San Francisco Medical Center, came to be given a dose of 38 ½ antibiotic pills rather than the single pill he should have been given. (Part 1, part 2, part 3; part 4 will appear tomorrow.) Pretty much every problem with computer automation that I write about in The Glass Cage — automation complacency, automation bias, alert fatigue, overcomplexity, distraction, miscommunication, workload spikes, etc. — is on display in the chain of events that Wachter, himself a physician, describes.

It’s a complicated story, with many players and many moving parts, but I’ll just highlight one crucial episode. After the erroneous drug order enters the hospital’s computerized prescription system, the result of (among other things) a poorly designed software template, the order is transmitted to the hospital’s pill-packaging robot. Whereas a pharmacist or a pharmacy technician would almost certainly have noticed that something was amiss with the order, the robot dutifully packages up the 38 ½ pills as a single dose without a second’s hesitation:

The robot, installed in 2010 at a cost of $7 million, is programmed to pull medications off stocked shelves; to insert the pills into shrink-wrapped, bar-coded packages; to bind these packages together with little plastic rings; and then to send them by van to locked cabinets on the patient floors. “It gives us the first important step in eliminating the potential for human error,” said UCSF Medical Center CEO Mark Laret when the robot was introduced.

Like most robots, UCSF’s can work around the clock, never needing a break and never succumbing to a distraction.

In the blink of an eye, the order for Pablo Garcia’s Septra tablets zipped from the hospital’s computer to the robot, which dutifully collected the 38 ½ Septra tablets, placed them on a half-dozen rings, and sent them to Pablo’s floor, where they came to rest in a small bin waiting for the nurse to administer them at the appointed time. “If the order goes to the robot, the techs just sort it by location and put it in a bin, and that’s it,” [hospital pharmacist] Chan told me. “They eliminated the step of the pharmacist checking on the robot, because the idea is you’re paying so much money because it’s so accurate.”

Far from eliminating human error, the replacement of an experienced professional with a robot ensured that a major error went unnoticed. Indeed, by giving the mistaken dose the imprimatur of a computer, in the form of an official, sealed, bar-coded package, the robot pretty much guaranteed that the dispensing nurse, falling victim to automation bias, would reject her own doubts and give the child all the pills.
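Wachter’s account doesn’t show the dispensing software’s internals, but the gap it exposes is easy to state in code: the automated pipeline had no plausibility check of the kind an experienced pharmacist applies without thinking. What follows is a purely illustrative sketch — the threshold and function are hypothetical, my own invention rather than UCSF’s system.

```python
# A deliberately generic sketch -- not UCSF's software -- of the plausibility
# check a human pharmacist applies without thinking and an automated
# dispensing pipeline applies only if someone designs it in.

MAX_TABLETS_PER_DOSE = 4  # hypothetical threshold, chosen for illustration

def package_dose(drug: str, tablets: float) -> dict:
    """Shrink-wrap a single dose, refusing implausible quantities."""
    if tablets > MAX_TABLETS_PER_DOSE:
        raise ValueError(
            f"{tablets:g} tablets of {drug} in one dose looks implausible; "
            "route the order to a pharmacist for review"
        )
    return {"drug": drug, "tablets": tablets}

# The order in Wachter's story: 38.5 tablets packaged as a single dose.
package_dose("Septra", 38.5)  # raises instead of silently wrapping the pills
```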

The problems with handwritten prescriptions — it’s all too easy to misinterpret doctors’ scribbles, sometimes to fatal effect — are legendary. But solving that very real problem with layers of computers, software templates, and robots introduces a whole new set of problems, most of which are never foreseen by the system’s designers. As is often the case in automating complex processes, the computers and their human partners end up working at cross-purposes, each operating under a different set of assumptions. Wachter explains:

As Pablo Garcia’s case illustrates, many of the new holes in the Swiss cheese weren’t caused by the computer doing something wrong, per se. They were caused by the complex, and under-appreciated, challenges that can arise when real humans — busy, stressed humans with all of our cognitive biases — come up against new technologies that alter the work in subtle ways that can create new hazards.

The lesson isn’t that computers and robots don’t have an important role to play in medicine. The lesson is that automated systems are also human systems. They work best when designed with a painstaking attentiveness to the skills and foibles of human beings. When people, particularly skilled, experienced professionals, are pushed to the sidelines, in the blind pursuit of efficiency, bad things happen.

Pablo Garcia survived the overdose, though not without a struggle.

Twilight of the idylls


The Silicon Valley guys have a new hobby: driving fast cars around private tracks. They love it. “When you’re really in the zone in a racecar, it’s almost meditative,” Google executive Jeff Huber tells the Times’s Farhad Manjoo. Adds Yahoo senior vice president Jeff Bonforte, “Your brain is so happy that it washes over you.” The Valley guys are a little nervous about the optics of their pastime — “Try to tone down the rich guy hobby thing,” angel investor and ex-Googler Joshua Schachter instructs Manjoo — but the “visceral thrill” of driving has nevertheless made it “the Valley’s ‘it’ hobby.”

The Valley guys are rushing to rent out racetracks and strap themselves into Ferraris at the very moment that they’re telling the rest of us how miserable driving is, and how liberated we’ll all feel when robots take the wheel. Jazzed by a Googler’s TED talk on driverless cars, MIT automation expert Andrew McAfee says that the Googlemobile will “free us from a largely tedious task.” Writes Wired transport reporter Alex Davies, “Liberated from the need to keep our hands on the wheel and eyes on the road, drivers will become riders with more time for working, leisure, and staying in touch with loved ones.” When Astro Teller, head of Google X, watches people drive by in their cars, all he hears is a giant sucking sound, as potentially productive minutes pour down the drain of a vast time sink. “There’s over a trillion dollars of wasted time per year we could collectively get back if we didn’t have to pay attention while the car took us from one place to another,” he said in a South by Southwest keynote this month.

Driving on a private track may be pleasantly meditative, even joy-inducing, but driving on public thoroughfares is just a drag.

What’s curious here is that the descriptions of everyday driving offered with such confidence by the avatars of driverlessness are at odds with what we know about people’s actual attitudes toward and experience of driving. People like to drive. Surveys and other research consistently show that most of us enjoy being behind the wheel. We find driving relaxing and fun and even, yes, liberating — a respite from the demands of our workaday lives. Seeing driving as a “problem” because it prevents us from being productive gets the story backwards. What’s freeing about driving is the very fact that it gives us a break from the pressure to be productive.

That doesn’t mean we’re blind to automotive miseries. When researchers talk to people about driving, they hear plenty of complaints about traffic jams and grinding commutes and bad roads and parking hassles and all the rest. Our attitudes toward driving are complex, always have been, but on balance we like to have our hands on the wheel and our eyes on the road, not to mention our foot on the gas. About 70 percent of Americans say they “like to drive,” while only about 30 percent consider it “a chore,” according to a 2006 Pew survey. A survey of millennials, released earlier this year by MTV, found that, contrary to common wisdom, most young people enjoy cars and driving, too. Seventy percent of Americans between the ages of 18 and 34 say they like to drive, and 72 percent of them say they’d rather give up texting for a week than give up their car for the same period. The percentage of people who like to drive has fallen a bit in recent years as traffic has worsened — 80 percent said they liked to drive in a 1991 Pew survey — but it’s still very high, and it belies the dreary picture of driving painted by Silicon Valley. You don’t have to be wealthy enough to buy a Porsche or to rent out a racetrack to enjoy the meditative and active pleasures of driving. They can be felt on the open road as well as the closed track.

In suggesting that driving is no more than a boring, productivity-sapping waste of time, the Valley guys are mistaking a personal bias for a universal truth. And they’re blinding themselves to the social and cultural challenges they’re going to face as they try to convince people to be passengers rather than drivers. Even if all the technical hurdles to achieving perfect vehicular automation are overcome — and despite rosy predictions, that remains a sizable if — the developers and promoters of autonomous cars are going to discover that the psychology of driving is far more complicated than they assume and far different from the psychology of being a passenger. Back in the 1970s, the public rebelled, en masse, when the federal government, for seemingly solid safety and fuel-economy reasons, imposed a national 55-mile-per-hour speed limit. The limit was repealed. If you think that everyone’s going to happily hand the steering wheel over to a robot, you’re probably delusional.

There’s something bigger going on here, and I confess that I’m still a little fuzzy about it. Silicon Valley seems to have a good deal of trouble appreciating, or even understanding, what I’ll term informal experience. It’s only when driving is formalized — removed from everyday life, transferred to a specialized facility, performed under a strict set of rules, and understood as a self-contained recreational event — that it can be conceived of as being pleasurable. When it’s not a recreational routine, when it’s performed out in the world, as part of everyday life, then driving, in the Valley view, can only be understood within the context of another formalized realm of experience: that of productive busyness. Every experience has to be cleanly defined, has to be categorized. There’s a place and a time for recreation, and there’s a place and a time for productivity.

This discomfort with the informal, with experience that is psychologically unbounded, that flits between and beyond categories, can be felt in a lot of the Valley’s consumer goods and services. Many personal apps and gadgets have the effect, or at least the intended effect, of formalizing informal activities. Once you strap on a Fitbit, you transform what might have been a pleasant walk in the park into a program of physical therapy. A passing observation that once might have earned a few fleeting smiles or shrugs before disappearing into the ether is now, thanks to the distribution systems of Facebook and Twitter, encapsulated as a product and subjected to formal measurement; every remark gets its own Nielsen rating.

What’s the source of this crabbed view of experience? I’m not sure. It may be an expression of a certain personality type. It may be a sign of the market’s continuing colonization of the quotidian. I’d guess it also has something to do with the rigorously formal qualities of programming itself. The universality of the digital computer ends — comes to a crashing halt, in fact — where informality begins.

Image: Burt and Sally mix their pleasures in “Smokey and the Bandit.”

History and economics, simplified

Cub economist Marc Andreessen has been thinking again:

[Image: simple]

I understand that data doesn’t explain everything, but in this particular case I would really, really like to see the data that Marc has assembled to back up his argument.

Evgeny Morozov has a litmus test for technology critics


Evgeny Morozov has written, in The Baffler, a critique of technology criticism in the guise of a review of my book The Glass Cage. He makes many important points, as he always does — Morozov’s intellect is admirably fierce and hungry — but his conclusions about the nature and value of technology criticism are wrong-headed, their implications pernicious. Morozov wants to narrow the sights of such criticism, to declare as invalid any critical approach that isn’t consistent with his own. He wants to establish an ideological litmus test for technology critics. As someone who approaches the question of technology, of tool-making and tool use, from a very different angle than the one Morozov takes, I’d like to respond with a word or two in defense of an open and pluralistic approach to technology criticism, rather than the closed and doctrinaire approach that Morozov advocates.

First, let me deal quickly with the personal barbs that are one of the hallmarks of Morozov’s brand of criticism. At one point in his review, he writes, “Carr doesn’t try very hard to engage his opponents.” This odd remark — there are plenty of people who disagree with me, but I hardly see them as “opponents” — says much about Morozov’s psychology as a critic. He is always trying hard — very, very hard — to “engage his opponents.” You sense, in reading his work, that there is a hot, sweaty wrestling match forever playing out in his mind, and that he can’t stop glancing up at the scoreboard to see where things stand. No opportunity to score a point goes unexploited. It’s this wrestling-match mentality that explains Morozov’s tactic of willfully distorting the ideas of other writers — his “opponents” — in order to make it easier for him to add to his score. The writer Steven Johnson has summed up Morozov’s modus operandi with precision: “He’s like a vampire slayer that has to keep planting capes and plastic fangs on his victims to stay in business.” With Morozov, a fierce intellect and a childish combativeness would seem to be two sides of the same personality, so it’s probably best to ignore the latter and concentrate on the former.

Morozov is disappointed that a “radical” political viewpoint is not more prominent in discussions of the role of technology in American culture. Technology criticism, at least in its popular form, concerns itself mainly, he says, with “what an ethos of permanent disruption means for the configuration of the liberal self or the survival of its landmark institutions, from universities to newspapers.” Radical political approaches to technology criticism, particularly those that place “technology, media, and communications within Marxist analytical frameworks,” go largely unnoticed. That certainly seems an accurate assessment. I think the same could be said about any popular discussion about any aspect of American culture. Marxist analytical frameworks, though prized in academia, cut no ice in the mainstream. Maybe that reflects a flaw in American culture. Maybe it reflects a flaw in the frameworks. Maybe it reflects a bit of both. It is in any case one symptom of the general narrowing of our public debates. In an earlier critique of technology criticism, published in Democracy, Henry Farrell argued that our intellectual life has in general been constrained by a highly competitive “attention economy” that pushes popular debates into a safe middle ground. To be radical, whether from the left or the right (or any other angle), is to be peripheral. The public intellectual has turned into an intellectual entrepreneur, selling a basket of bland ideas to a market of easily distracted consumers.

It’s easy, then, to agree with Morozov that there’s something lacking in contemporary American technology criticism. A broader discussion of technology, one that makes room for and indeed welcomes strongly political points of view, including all manner of radical ones that question the status quo, would enrich the conversation and perhaps give it a greater practical force. Except that that’s not really what Morozov wants. Morozov has come to believe that the only valid technology criticism is political criticism. In fact, he believes that the only valid technology criticism is political criticism that shares his own particular ideology. “Today,” he writes at a crucial juncture in his review, “it’s obvious to me that technology criticism, uncoupled from any radical project of social transformation, simply doesn’t have the goods.” The only critics fit to answer the hard questions about technology are those “who haven’t yet lost the ability to think in non-market and non-statist terms.” Only those with a “progressive agenda,” indeed an “emancipatory political vision,” can pass the litmus test; everyone else is ideologically suspect. Morozov, always eager to point out any definitional fuzziness in other people’s vocabularies, doesn’t bother to define precisely what he means by radicalism or progressivism or social transformation or an emancipatory political vision. One assumes, though, that he will be the judge of what is legitimate and what is not. Anyone who doesn’t toe the Morozov line will be branded either a “romantic” or a “conservative” and hence deemed unfit to have a voice in the conversation. “Carr is extremely murky on his own [politics],” Morozov declares at one point, casting aspersion in the form of suspicion.

What particularly galls Morozov is any phenomenological critique of technology, any critical approach that begins by examining the way that the tools people use shape their actual experience of life — their behavior, their perceptions, their thoughts, their relations with others and with the world. The entire tradition of such criticism, a rich and vital tradition that I’m proud to be a part of, is anathema to him, a mere distraction from the political. Just as Morozov ignores my discussion of the politics of progress in The Glass Cage (to acknowledge it might complicate his argument), he blinds himself to the political implications of the work of the phenomenological philosophers. To explore the personal effects of tool use is, he contends, just an elitist waste of time, a frivolous bourgeois pursuit that he reduces to “poring over the texts of some ponderous French or German philosopher.” What we don’t understand we hold in contempt. Because he has no interest in the individual except as an elemental political particle, not a being but an abstraction, Morozov concludes that technology criticism is a zero-sum game. Any time spent on the personal is time not spent on the political. And so, in place of pluralistic inquiry, we get dogmatism. An argument that might have been expansive and welcoming becomes a sneering and self-aggrandizing exercise in exclusion.

“Goodbye to all that,” Morozov writes, grandly dismissing all of his own earlier work that does not fit neatly into his new analytical frameworks. Apparently, he has recently experienced a political awakening. It’s a shame, though not a surprise, that it has come at the cost of an open mind.

Photo: Juan Ramon Martos.

The mosquito


Every time I convince myself that I’ve disabled all sources of automated notifications on all devices, something slips through. The latest was from Twitter, and it took this form:

@soandso retweeted one of your Retweets!

I deleted it with the same alacrity I show in swatting a mosquito about to plunge its proboscis into the capillaries of my forearm. The notification was there, and then it wasn’t there — just the after-image twinkling through the neurons of my visual cortex, hardening into memory.

Although I appreciate Twitter’s fussy approach to capitalization (if not the girl-scout eagerness of its terminal punctuation), the banality of the message strikes me as fundamental:

@soandso retweeted one of your Retweets!

One feels, in reading this, or glancing hungrily across its narrow expanse, as one does with notifications, that one has come to the bedrock of social media. And, more or less by definition, when one comes to the bedrock of social media, one also comes to the bedrock of vanity. Here at last one knows exactly where one stands.

How many times in the course of a day, I wondered, does a notification in this or some similar retweet-of-a-retweet form elbow its way — such tiny elbows! — onto the screen of a device? Is the number in the billions? It must be in the billions. If I close my eyes, I can actually feel the weight of them all. It feels soft, like a down pillow.

@soandso retweeted one of your Retweets!

“Man hands on misery to man,” Philip Larkin wrote. “It deepens like a coastal shelf.” I would be overreaching if I were to suggest that this trifling message, this near-nothingness of intraspecies communication, this wisp of flattery, is a unit of misery. But the melancholy image of the deepening coastal shelf seems apt: all those grains of sand swirling downward through the water and gently coming to rest on the sea floor.

Have I mixed my geological metaphors? So be it. The world shapes itself to our thoughts.

Our intentions? That’s a different matter.

You think you were quick enough. You think you swatted the mosquito before it pierced your skin. And yet now you see the small, pink welt rising on your forearm, and you know that in a matter of seconds you will be scratching it and that there will be pleasure in that act. For vanity is the strongest of forces.

Thank you, @soandso. Thank you for retweeting one of my Retweets. Thank you for your part in it. Thank you for adding a little something to the whole that is never whole.

Photo: Jo Naylor.