Patience is a network effect

If you want to see how technology shapes the way we perceive the world, just look at the way our experience of time has changed as network speeds have increased. Back in 2006, a famous study of online retailing found that a third of online shoppers (those with broadband connections) would abandon a retailing site if its pages took four seconds or longer to load and that nearly two-thirds of shoppers would bolt if the delay reached six seconds. The finding became the basis for the Four Second Rule: People won’t wait more than about four seconds for a web page to load. In the succeeding six years, the Four Second Rule has been repealed and replaced by the Quarter of a Second Rule. Studies by companies like Google and Microsoft now find that it only takes a delay of 250 milliseconds in page loading for people to start abandoning a site. “Two hundred fifty milliseconds, either slower or faster, is close to the magic number now for competitive advantage on the Web,” Microsoft search guru Harry Shum observed earlier this year. To put that into perspective, it takes about 400 milliseconds for you to blink an eye.

Now, a new study of online video viewing (via GigaOm) provides more evidence of how advances in media and networking technology reduce the patience of human beings. The researchers, Shunmuga Krishnan and Ramesh Sitaraman, studied a huge database from Akamai Technologies that documented 23 million video views by nearly seven million people. They found that people start abandoning a video in droves after a two-second delay and that the abandonment rate increases by 5.8 percent for every additional second of delay.
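To get a feel for what that rate implies, here is a back-of-the-envelope sketch of my own (not the researchers' model), assuming the 5.8-percent-per-second figure holds roughly linearly over the first several seconds of delay:

```python
# Rough sketch of the Krishnan/Sitaraman finding: viewers start leaving after
# about a two-second startup delay, and abandonment grows by roughly 5.8
# percentage points for each additional second. Treating this as a simple
# linear rule over short delays is my assumption, not the study's own model.

def estimated_abandonment(delay_seconds: float) -> float:
    """Estimated share of viewers who give up at a given startup delay."""
    if delay_seconds <= 2.0:
        return 0.0
    return min(1.0, 0.058 * (delay_seconds - 2.0))

for delay in (2, 4, 6, 8, 10):
    print(f"{delay:>2}-second delay -> ~{estimated_abandonment(delay):.0%} of viewers gone")
```

(The linear extrapolation plainly breaks down at longer delays, where the curve flattens out.)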

That won’t come as a surprise to anyone who has experienced the rapid ratcheting up of blood pressure that occurs between the moment a Start button is clicked and the moment a video starts rolling. In fact, the only surprise here is that 10 percent of people seem willing to wait a full 50 seconds for a video to begin. (My only explanation is that those are people who have gone to take a leak.)

More interesting is the study’s finding of a causal link between higher connection speeds and higher abandonment rates.

As we experience faster flows of information online, we become less patient people. This finding has obvious importance to anyone involved in online media and online advertising or in running the data centers and networks used to distribute media and ads. But it also has important implications for how all of us think, socialize, and in general live. If we assume that networks will continue to get faster — a pretty safe bet — then we can also conclude that we’ll become more and more impatient, more and more intolerant of even microseconds of delay between action and response. As a result, we’ll be less likely to experience anything that requires us to wait, that doesn’t provide us with instantaneous gratification.

One thing this study doesn’t tell us — but I would hypothesize as true (based on what I see in myself as well as others) — is that the loss of patience persists even when we’re not online. In other words, digital technologies are training us to be more conscious of and more resistant to delays of all sorts — and perhaps more intolerant of moments of time that pass without the arrival of new stimuli. Because our experience of time is so important to our experience of life, it strikes me that these kinds of technology-induced changes in our perception of delays can have particularly broad consequences.

The computer becomes infrastructure

In the new issue of the venerable New Left Review, Rob Lucas offers a thoroughgoing critique of my work, going back to the 2003 article “IT Doesn’t Matter” (which, incidentally, I began writing ten years ago this month). Here’s a bit from his discussion of my first book, which grew out of that article:

On an abstract political-economic level, the extensive argumentation of Does IT Matter? was a sledgehammer for a rather small nut: it is a truism that no individual company will succeed in securing for itself significant long-term advantage over competitors solely through the purchase of goods that are also available to those same competitors. But the burden of Carr’s book was to provide an integrated economic and historical elaboration of the dynamics through which IT had been increasingly commoditized, making the transition from a prohibitively expensive endeavour for most companies — something only taken on at great risk by particular capitals in pioneering efforts, such as J. Lyons & Co’s late 1940s LEO (Lyons Electronic Office) — to an increasingly standardized, widely available, mass-produced good with a rapidly deflating price tag coupled to its exponentially improving performance. With this commoditization, Carr argued, IT had made the transition from a particular asset of the individual company to something ‘shared’ by companies, a commodity generally available to all. In the process it had become a standard aspect of infrastructure, a prerequisite for most businesses; it was thus clearly meaningless to appeal to IT spending as a primary basis for ‘competitive advantage’.

Much of this story of commoditization could be told at the level of computer hardware, in isolation from other factors: here, there had been a rapid cheapening of goods related to the technical progress exemplified in Moore’s law, and to the standardization in component manufacture represented by companies like Dell. Already by 2000 the cost of data processing had declined by more than 99.9 per cent since the 1960s, while storage was a tiny fraction of its 1950s price. But for Carr, software also had particular characteristics which help to drive the commoditization of IT in general. Since typical production costs were very high and distribution costs very low, software had extraordinary economies of scale, often making the pooling of resources between firms preferable to the development of particularistic in-house technologies. This supplied an economic rationale for the centralization of IT provision by third parties, who could make the most of these economies of scale by serving many clients. But it also provided an economic basis for the programmer’s communitarian ethic, embodied in professional user groups such as IBM’s long-running SHARE. The resulting standardizations of hardware and software meant that IT typically overshot the needs of its users, since technologies developed for the most demanding users tended to get generalized. This in turn put a deflationary pressure on prices, since it was rational for users to opt for cheaper, older or free technologies that were adequate to their needs, rather than wildly exceeding them. And since software was not subject to wear and tear, once it had saturated a market, new profits could only be gleaned by pushing users through an ‘upgrade cycle’, which they often resisted.

Carr viewed IT as infrastructural in the same sense as the railway, telegraph, telephone, electrical grid and highway systems. For Carr, the consolidation of this infrastructural status was a realization of IT’s tendency to be cheapened, standardized and made generally available, issuing ultimately in its conversion into a grid-based utility — the apotheosis of commoditized IT. Increasingly, IT goods — software services, data storage and even computing power itself — would not be purchased as the fixed capital of individual companies, but would be based in vast centralized data centres and delivered as services over the Internet by a handful of very large providers. On this trajectory, IT was following a path previously taken by electricity provision — a historical analogy that Carr would spell out in his next book, The Big Switch.
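To put rough numbers on the economies-of-scale logic in that passage, here is a toy calculation of my own; the figures are invented for illustration and come from neither Lucas nor the book:

```python
# Toy illustration of software's economies of scale: a large one-time
# development cost spread across users, plus a near-zero cost to serve each
# additional user. All numbers here are invented for illustration.

FIXED_DEV_COST = 10_000_000   # build the software once
MARGINAL_COST = 5             # distribute and support one more user

def cost_per_user(users: int) -> float:
    """Average cost per user when a single provider serves `users` people."""
    return FIXED_DEV_COST / users + MARGINAL_COST

# A firm building in-house for its own 1,000 employees versus a centralized
# provider pooling 1,000,000 users across many client companies:
print(f"in-house, 1,000 users:   ${cost_per_user(1_000):>12,.2f} per user")
print(f"pooled, 1,000,000 users: ${cost_per_user(1_000_000):>12,.2f} per user")
```

The pooled provider's per-user cost collapses toward the marginal cost, which is the economic rationale the passage gives for centralized, utility-style IT provision.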

Keep your party out of my tragedy

Jim Stogdill muses on the emotional and cognitive dissonance produced when realtime messaging systems allow conversation without context:

Since the advent of Twitter I’ve often found myself laughing at funerals, crying at parties, and generally failing time and again to say the right thing. Twitter is so immediate, so of the moment, but it connects people across the globe who may be experiencing very different moments. […]

For many of us even here in the East, Sandy is basically over. We are fortunate. We have power, food in the refrigerator, and a place to brew our coffee. But all over New Jersey and New York this remains far from true. The storm will be millions of people’s primary context for weeks to come. I can’t help but wonder what it must be like to risk a bit of carefully hoarded smart phone battery, while separated from the flood-ravaged street by flight after flight of dark staircase, to take a quick glance at Twitter only to see “OMFG, will Disney put mouse ears on Darth Vader?”

This is hardly a new phenomenon, but the dissonance does seem more intimate now, more in-your-face.

No exit

One of the advantages of embedding culture in nature, of requiring that works of reason and imagination be given physical shape, is that it imposes on artists and thinkers the rigor of form, particularly the iron constraints of a beginning and an ending, and it gives to the rest of us the aesthetic, intellectual, and psychological satisfactions of having a rounded experience, of seeing the finish line in the distance, approaching it, arriving at it. When we’re in the midst of the experience, we may not want it to end, we may dream of being launched into the deep blue air of endlessness, but the dream of endlessness is only possible, only has meaning, because of our knowledge that there is an end, even if it is an arbitrary end, the film burning in the projector.

Long before Gutenberg forged his little metal letters in Mainz, the media of writing, being necessarily physical, had clear beginnings and endings. The scroll was more open-ended, more continuous than the tablet that preceded it and the page that followed it, but even a reader rolling through a scroll could see, and feel, the end approaching, had a pretty clear sense of what was left. Beginnings and endings predated the written word, of course — Odysseus returned home — but the forms that writing took in the world reinforced what seems to be our natural desire to start in one particular place and finish in another.

Digital media, particularly hypermedia, blur beginnings and endings. Everything is in the middle. No one other than an absurdist would ask where the web begins and ends; the web goes forever on. This is exciting in a way. When you’re used to having beginnings and endings, removing them can feel liberating. The inventors and promoters of hypertext and hypermedia systems have always celebrated the way they seem to free us from the constraints of form, the way they seem to reflect the open-endedness of thought itself and of knowledge itself. Said Ted Nelson: “Hierarchical and sequential structures, especially popular since Gutenberg, are usually forced and artificial.” He did not mean that as a compliment.

But even though we read “forced” and “artificial” as negative terms, there’s much that’s praiseworthy about the forced and the artificial. Civilization is forced and artificial. Culture is forced and artificial. Art is forced and artificial. These things don’t spring from the ground like dandelions. And isn’t one of the distinctive glories of the human mind its ability to impose beginnings and endings on its workings, to carve stories and arguments out of the endless branching flow of thought and impression? Not all containers are jails. Imposing form on the formless may be artificial, but it’s also liberating (not least for giving us walls to batter).

There are, as designer Craig Mod points out in an article on the future of magazines, practical angles here. What should give us pause about the shift from page to screen, Mod argues, is not the loss of paper but the loss of boundaries:

I miss the edges — physical and psychological. I miss the start of reading a print magazine, but mostly, I miss the finish. I miss the satisfaction of putting the bundle down, knowing I have gotten through it all. Nothing left. On to the next thing.

The very design of a physical magazine tells a story (sequential and, yes, hierarchical), from cover to table of contents to front matter to features to the last page with (typically) its little valedictory essay, textual or photographic. That’s a hard story to tell when entryways are everywhere and exits are nowhere. When there’s no way out, we get nervous. We start to feel trapped in our freedom:

While a stack of printed back issues of National Geographic may seem intimidating, it is not unapproachable. The magazines may be dense, but you know where you stand as you read them. But what about staring at an empty search box leading into the deep archive of nationalgeographic.com? …

Magazine websites, like the World Wide Web itself, open one up to continuous exploration through links and related content. There’s beauty in that, if one is up for total immersion. But it’s easier to become overwhelmed, or lost. … The question “How deep does it go?” is one that nobody had to ask the printed edition of Newsweek. Newsweek.com? It’s not so clear. It’s why we love “Most Popular” and “Most E-mailed” lists — they bring some relief of edges to the digital page.

We may yearn for boundlessness, but to be granted it is to be cursed.  “Thought,” wrote Robert Frost, “has a pair of dauntless wings,” but “Love has earth to which she clings / With hills and circling arms about.” The web needs to find its bounds, and its bonds. It needs to come back to earth. That’s the challenge now.

[Tout va bien, Jean-Luc Godard, 1972]

Signage

“No one knows how to create words and pictures that are meant to be consumed out there in the world,” writes Alexis Madrigal, contemplating the rapid approach of Google Glass and other reality-augmentation wearables. I think Madrigal is giving short shrift to those who ply the signage trade:

Not to mention the graffiti trade:

Then again, I guess the old signs have become part of the reality they augment — as the new signs will, too. Reality augmentation is all about adding new annotations to old annotations of even older annotations.

But Madrigal is right that we have something here that we haven’t seen before. The realtime annotation of the nondigitized environment requires a new kind of art, one that crosses the sensibility of the signmaker with those of the curator, the adman, and the saucier.

To me, in the extremely attention-limited environment of augmented reality, you need a new kind of media. You probably need a new noun to describe the writing. Newspapers have stories. Blogs have posts. Facebook has updates. And AR apps have X.

That’s on the money. But not a new noun — no need to go that far. A recycled noun would work just fine. My suggestion is “motes.” As in: “Glass just put an awesome mote in my eye.” It comes to us — “mote” does — from an old Dutch word meaning a speck of sawdust or a grain of sand, and, thanks to the King James Bible, it connotes a slight distortion of vision, a little warpage in one’s perception of the real. There’s a social angle as well, courtesy of Rupert Brooke: “One mote of all the dust that’s I / Shall meet one atom that was you.” Actually, forget that: it’s a little morbid for social-networking purposes. Let’s keep our reality augmentation on this side of the grave.

What does the mermaid see when she looks in the mirror?

Where the hell was I? Oh yeah: “Mote” is a tragically underused word. We have an opportunity to right that wrong. Let’s seize it.

[pub sign photo by spatmackrel; graffiti photo by pierremichel.75001]

The ethics of MOOC research

In writing my recent article on massive open online courses, I talked with the leaders of the Big Three in the nascent industry — Coursera, edX, and Udacity — and they all stressed the importance of large-scale data collection and analysis to their plans. By meticulously tracking the actions of students, they hope to build large behavioral databases that can then be mined for pedagogical insights. The findings, they believe, will help improve particular classes as well as bolster our general understanding of teaching and learning.

The MOOCs’ research agenda seems entirely wholesome. But it does raise some tricky ethical issues, as a correspondent from academia pointed out to me after my article appeared. “At most institutions,” he wrote, the kind of behavioral research the MOOCs are doing “would qualify as research on human subjects, and it would have to be approved and monitored by an institutional review board, yet I have heard nothing about that being the case with this new adventure in technology.” Universities are, for good reason, very careful about regulating, approving, and monitoring biological and behavioral research involving human subjects. In addition to the general ethical issues raised by such studies, there are strict federal regulations governing them. I am no expert on this subject, but my quick reading of some of the federal regulations suggests that certain kinds of purely pedagogical research are exempt from the government rules, and it may well be that the bulk of the MOOC research falls into that category.

Nevertheless, given the sensitivities involved, you’d think that schools partnering with the MOOC providers, particularly the for-profit providers, would be giving the research programs a thorough review and demanding some kind of ongoing oversight. Yet if you look at the contract between the University of Michigan and Coursera, a contract that Coursera says is similar to the ones it has with other institutions, you find almost nothing about data and research. There is a section (#14) establishing basic confidentiality safeguards for student data (names, email addresses, test scores), but it doesn’t say anything about research. The only other thing I saw was a short note in an exhibit appended to the contract, which says that Coursera “will administer assessments and make available to University certain aggregate analytics regarding End User behavior and performance, which will include information on any of the following: End User demographics, module usage, aggregate assessment scores (stratified by demographics) and reviews by demographics.” I saw nothing about any review, oversight, or restriction of research programs or of the use of the resulting data.

I also glanced through Coursera’s terms of service. They lay out, in broad terms, the “personal” and “non-personal” information that the company will collect from students. The personal information is mainly used for formal communications with students. The non-personal information is what’s collected for research and other purposes:

When users come to our Site, we may track, collect and aggregate Non-Personal Information indicating, among other things, which pages of our Site were visited, the order in which they were visited, when they were visited, and which hyperlinks were “clicked.” We also collect information from the URLs from which you linked to our Site. Collecting such information may involve logging the IP address, operating system and browser software used by each user of the Site. Although such information is not Personally Identifiable Information, we may be able to determine from an IP address a user’s Internet Service Provider and the geographic location of his or her point of connectivity. We also use or may use cookies and/or web beacons to help us determine and identify repeat visitors, the type of content and sites to which a user of our Site links, the length of time each user spends at any particular area of our Site, and the specific functionalities that users choose to use.
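To make those categories concrete, here is what a single aggregated, non-personal record might look like. This is purely my illustration: every field name below is hypothetical and none of it reflects Coursera's actual schema.

```python
# Hypothetical sketch of the "Non-Personal Information" categories described in
# the terms of service: pages visited and their order, timestamps, clicked
# links, the referring URL, IP-derived geography and ISP, browser and operating
# system, and cookie/web-beacon repeat-visitor flags. The field names are
# invented for illustration; this is not Coursera's schema.
from dataclasses import dataclass
from typing import List

@dataclass
class ClickstreamRecord:
    pages_visited: List[str]   # which pages were viewed, in order
    timestamps: List[str]      # when each page was visited
    links_clicked: List[str]   # hyperlinks "clicked" along the way
    referrer_url: str          # the URL the user arrived from
    ip_derived_region: str     # geography inferred from the IP address
    isp: str                   # Internet Service Provider, also IP-derived
    browser: str
    operating_system: str
    repeat_visitor: bool       # inferred via cookies or web beacons

record = ClickstreamRecord(
    pages_visited=["/course/intro-stats/lecture/3", "/course/intro-stats/quiz/1"],
    timestamps=["2012-11-10T14:02:11Z", "2012-11-10T14:19:47Z"],
    links_clicked=["/course/intro-stats/quiz/1"],
    referrer_url="https://www.example.edu/announcements",
    ip_derived_region="Michigan, US",
    isp="ExampleNet",
    browser="Firefox 16",
    operating_system="Windows 7",
    repeat_visitor=True,
)
```

As the terms themselves note, none of this is Personally Identifiable Information, though the IP address can still reveal a user's ISP and approximate location.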

Coursera says it will use the information “in aggregate form to build higher quality, more useful services by performing statistical analyses of the collective characteristics and behavior of our users, and by measuring demographics and interests regarding specific areas of our Site.” But then the company also notes, “We may also use it for other business purposes.” That sounds like carte blanche.

I have no reason to think that Coursera, or any other MOOC, has anything but noble intentions when it comes to data collection and data mining. I certainly believe that the leaders of the companies are motivated by a desire to improve education. But Coursera is a for-profit business, backed by venture capitalists. Sooner or later, it will have to make money, and, given the current excitement in Silicon Valley and elsewhere about the commercial potential of “Big Data,” it seems inevitable that the company and its investors will explore “other business purposes” for its data, including ones that would bring in revenues.

In their excitement to join forces with MOOC providers, university administrators and professors may not be giving enough thought to all the data that’s going to be collected and all the research activities that are going to be pursued. It’s an oversight they may come to regret.