This spring, my book about cloud computing and its consequences, The Big Switch: Rewiring the World, from Edison to Google, will be celebrating the fifth anniversary of its publication. (It’s remarkable to think that in 2008 the term “the cloud” had yet to enter the public mind; much has changed in those five years.) To mark the occasion, W. W. Norton will be releasing a second edition of the book, complete with a new concluding chapter that brings the story up to date. The new edition is available now for preorder through Amazon. Here’s the cover:
The other digital dualism
David Golumbia, author of The Cultural Logic of Computation, describes how the seemingly immaculate materialism of the Singularitarians masks a dualistic view of the mind and the body that would make Descartes proud:
There is a radical, deeply unscientific Cartesianism in singulatarians: they believe mind is special stuff, different from body, despite their apparent overt commitment to a fully materialistic, scientific conception of the world.
This neo-Cartesian conception of the mind predates the Singularitarians, of course. It’s wrapped up in a view of the brain as a computing machine whose logic and data can be abstracted from its physical manifestation. The computer scientist Danny Hillis voiced this view, in stark terms, back in 1992 in an interview with the Whole Earth Review. He said of human beings:
We’re a symbiotic relationship between two essentially different kinds of things. We’re the metabolic thing, which is the monkey that walks around, and we’re the intelligent thing, which is a set of ideas and culture. And those two things have coevolved together, because they helped each other. But they’re fundamentally different things. What’s valuable about us, what’s good about humans, is the idea thing. It’s not the animal thing.
The human consists of that which can be digitized and that which cannot, the logic (or mind) on the one hand and the metabolic machinery (or body) on the other, and these are fundamentally, essentially different things. Mind has no particular dependency on body, at least no more than the program has on the particular computer on which it runs.
What’s striking about the neo-Cartesian, or digital dualist, view is how it manages the neat trick of incorporating both extreme humanism and extreme misanthropy. Since what’s “good” about us is what’s not “the animal thing,” we are given a superior position to all the other animals with whom we share the earth, they being the mere “monkeys that walk around.” This sense of our unique specialness is combined with a deeply misanthropic hatred for the human body, which, by linking us back to mere animals, prevents us from fulfilling the immortal destiny of pure intelligence. “If I can go into a new body and last for 10,000 years,” said Hillis, “I would do it in an instant.” This view is, needless to say, very close to certain religious conceptions of the body and the soul, though what it lacks is any attempt to put a brake on hubris.
In his critique of the Singularitarian dualism, Golumbia draws a useful distinction between “intelligence” and “mind”:
The use of the term “intelligence” in the fields of AI/Cognitive Science as coterminous with “mind” has always been a red herring. The problems with AI have never been about intelligence: it is obviously the case that machines have become much more intelligent than we are, if we define “intelligence” in the most usual ways: ability to do mathematics, or to access specific pieces of information, or to process complex logical constructions. But they do not have minds–or at least not human minds, or anything much like them. We don’t even have a good, total description of what “mind” is, although both philosophy and some forms of Buddhist thought have good approximations available. Despite singulatarian insistence, we certainly don’t know how to describe “mind” outside of/separately from our bodies.
This is why the Singularitarian program is ultimately fated to fail: the mind is as much the monkey that walks around as it is the “intelligence” that can be abstracted and processed digitally. That gives Golumbia little comfort, however, because he sees the potential for an enormous amount of destruction in the unfettered pursuit of the Singularitarians’ warped humanistic/misanthropic goal—even if that goal is never reached.
Many of the most advanced technologists in corporate America for some reason adhere to this deeply unscientific piece of [dualist] dogma, and pursue unbridled technological progress and the automation of everything because they ‘know’ (following Kurzweil) that it is leading to transcendence — instead of believing the evidence of their own eyes, that it is leading someplace very dark indeed, especially when we reject out of hand — as nearly all Googlers do — that anybody but technologists should decide where technology goes.
Whether or not Golumbia’s darkest fears are realized, he raises an uncomfortable question: What does it mean for a society to thoughtlessly grant power to those who see the human body as an impediment to transcendence and believe that what’s good about us is what can be replicated by inanimate computers?
UPDATE: On a related note, see Colin McGinn’s review of Kurzweil’s latest book, particularly the discussion of the dangers of thinking that the brain is, literally, an information processor.
Photo by ePsos.
Deep whimsy
Megan Garber opines:
Under e-readers’ influence, the linear project of book-reading – from page 1 to page 501, sequentially – has shifted to something much more chaotic, much more casual, much more accommodating to whimsy and whim.
I’m reminded of Emerson’s winningly perverse advice about reading:
Do not attempt to be a great reader; and read for facts, and not by the bookful. … Stop, if you find yourself becoming absorbed, at even the first paragraph.
Emerson, always seeking to put up another fortification around the self’s besieged keep, worried that if you allowed yourself to become too engrossed in a book, you’d fall under the spell of the writer’s words and that would prevent you from hearing your own inner voice. His was an anxiety of influence.
But Garber’s celebration of magpie reading stems from a not entirely dissimilar place: what is whim if not a deep expression of personal autonomy? Whim is the self at play. And surely magpie reading — a paragraph or two of Austen, a stanza of Heaney, a page of Borges — is a thing to be celebrated. It’s like going through a box of chocolates, each with a different filling.
But is whim really best served by an elaborate mechanism? Is “the linear project” (note to self: good band name) really as unaccommodating of whimsy as Garber suggests? She traces the allegedly whimsy-producing e-reader back to a 16th-century contraption for “book-borne snacking” that looked like this:
Holy mackerel. That looks more like a whimsy-destroying machine. It’s the literary equivalent of a chastity belt. I mean, look at the poor guy’s expression.
As you approach your first Vegas-style dinner buffet, you of course expect it to be more accommodating of whimsy and whim than a meal prepared as a linear project by a single chef and served sequentially, from course 1 to course 4. But when you reel away from the buffet, bloated, gassy, and dissatisfied, all the dishes having blurred together on your palate and in your mind, you realize that what you had taken for whim and whimsy was nothing more than self-indulgence.
Garber points to Moby Dick as being snackworthy. And it’s certainly that—every sentence a bon-bon. But what Garber loses sight of is that to read Moby Dick in its entirety—sequentially, as a linear project—is to let whimsy and whim truly run wild. Has a more whimsical book been written? One of the great things about reading a good book, or enjoying any kind of art, is that our own sense of whim and whimsy gets to be magnified by the artist’s sense of whim and whimsy. There’s whimsy, and then there’s deep whimsy. The former is a cinch; the latter actually takes a little effort, requires a little resistance to the easy but fleeting pleasures of self-indulgence. If the e-reader makes it easier to avoid the linear project, it’s not so much accommodating whim and whimsy as it is rendering them a little less liberating, a little more mundane.
Whim is not synonymous with caprice. The magpie is more capricious than the hawk, but the hawk is infinitely more whimsical than the magpie. Ted Hughes understood that:
The convenience of the high trees!
The air’s buoyancy and the sun’s ray
Are of advantage to me;
And the earth’s face upward for my inspection.

My feet are locked upon the rough bark.
It took the whole of Creation
To produce my foot, my each feather:
Now I hold Creation in my foot …
Whim’s most whimsical when it has a will.
Photo by Mafue.
Visions of Barbie
I can’t get Barbie off my mind. It’s not the unnatural quality of her endowments (Barbie was posthuman before posthuman was cool). It’s not the eternal currency of her fashion sense. No, my interest in the doll that other dolls dream about is purely platonic — semiotic, even. It turns out that the story of Barbie is also the story of the web.
It all begins back at the dawn of the millennium, when law professor and cultural theorist Yochai Benkler decided to use a newfangled search engine called Google to search for “Barbie.” In a March 2002 lecture, he reported what he found:
Here is what Google produces when we search for “Barbie”: We see barbie.com, with “Activities and Games for Girls Online!”, and we see barbiebazaar.com, with “Barbie, Barbie dolls, Barbie doll magazine, etc.,” but then very quickly we start seeing sites like adios-barbie.com, “A Body Image Site for Every Body.” We see more Barbie collectibles, but then we see “Armed and Dangerous, Extra Abrasive: Hacking Barbie with the Barbie Liberation Organization.” Further down we see “The Distorted Barbie,” and all sorts of other sites trying to play with Barbie.
This was a very different set of results from what Benkler found when he performed the same search using the most popular search engine of the day, Overture:
What happens when we run the same search on Overture, the search engine used by Go.com, which is the Internet portal produced by Disney? We get “Barbies, New and Preowned” at Internet-doll.com, BarbieTaker wholesale Barbie store, “Toys for All Ages” at Amazon.com, and so on. The Barbie Liberation Organization is nowhere to be found.
The difference in results, Benkler said, reflected a fundamental difference in the workings of the two search engines, Overture selling its rankings in the market and Google producing its rankings through, essentially, a popular vote:
Google ranks search results based on counting “votes,” as it were, that is, based on how many other websites point to a given site. The more people who think your site is sufficiently valuable to link to it, the higher you are ranked by Google’s algorithm. Again, accreditation occurs on a widely distributed model, in this case produced as a byproduct of people building their own websites and linking to others. Overture is a website that has exactly the opposite approach. It ranks sites based on how much the site pays the search engine.
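The contrast Benkler is drawing can be boiled down to a toy model. The sketch below is purely illustrative: the link graph, the bid amounts, and most of the site names are invented, and Google’s actual algorithm, PageRank, weights votes recursively rather than simply counting inbound links. One ranker orders sites by how many other sites link to them; the other orders sites by what they pay.

```python
# Toy comparison of the two ranking models Benkler describes.
# Everything here is invented for illustration: the link graph, the bid
# amounts, and the site list. Google's real algorithm (PageRank) weights
# votes recursively rather than simply counting inbound links.

from collections import Counter

# Hypothetical link graph: each site and the sites it links to.
links = {
    "fansite.net":      ["barbie.com", "adiosbarbie.com", "blo.org"],
    "blo.org":          ["adiosbarbie.com"],
    "adiosbarbie.com":  ["blo.org"],
    "barbiebazaar.com": ["barbie.com"],
    "barbie.com":       [],
}

# "Vote"-based ranking: order sites by how many other sites link to them.
inbound = Counter(target for targets in links.values() for target in targets)
vote_ranking = [site for site, _ in inbound.most_common()]

# Paid ranking: order sites by how much each has bid (hypothetical dollars).
bids = {"barbie.com": 0.75, "internet-doll.com": 0.55, "barbiebazaar.com": 0.40}
paid_ranking = sorted(bids, key=bids.get, reverse=True)

print("Vote-based:", vote_ranking)  # critical sites appear if widely linked
print("Paid:      ", paid_ranking)  # only paying sites appear at all
```

The structural point survives the simplification: in the vote-based model a critical site can surface if enough people link to it, while in the paid model it simply never appears.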
In the Google method, and the results it produced, Benkler saw evidence of “the tremendous potential of the Internet to liberate individual creativity and enrich social discourse by thoroughly democratizing the way we produce information and culture.” This became the theme of Benkler’s 2006 magnum opus, The Wealth of Networks, which described how the rise of online social production “offers individuals a greater participatory role in making the culture they occupy, and makes this culture more transparent to its inhabitants.” Benkler returned to the Barbie example, providing a list of Google’s top ten results for a “Barbie” search:
1. Barbie.com (Mattel’s site)
2. Barbie Collector: Official Mattel Web site for hobbyists and collectors
3. AdiosBarbie.com: A Body Image for Every Body (site created by women critical of Barbie’s projected body image)
4. Barbie Bazaar Magazine (Barbie collectible news and Information)
5. If You Were a Barbie, Which Messed Up Version Would You Be?
6. Visible Barbie Project (macabre images of Barbie sliced as though in a science project)
7. Barbie: The Image of Us All (1995 undergraduate paper about Barbie’s cultural history)
8. Andigraph.free.fr (Barbie and Ken sex animation)
9. Suicide bomber Barbie (Barbie with explosives strapped to waist)
10. Barbies (Barbie dressed and painted as countercultural images)
He proceeded to flesh out the cultural implications of the Barbie search:
A nine-year-old girl searching Google for Barbie will quite quickly find links to AdiosBarbie.com, to the Barbie Liberation Organization (BLO), and to other, similarly critical sites interspersed among those dedicated to selling and playing with the doll. The contested nature of the doll becomes publicly and everywhere apparent, liberated from the confines of feminist-criticism symposia and undergraduate courses. This simple Web search represents both of the core contributions of the networked information economy. First, from the perspective of the searching girl, it represents a new transparency of cultural symbols. Second, from the perspective of the participants in AdiosBarbie or the BLO, the girl’s use of their site completes their own quest to participate in making the cultural meaning of Barbie. The networked information environment provides an outlet for contrary expression and a medium for shaking what we accept as cultural baseline assumptions.
Benkler here makes an important and valuable point about the nature of the web as it existed at the time. But was the web of a decade ago really representative of “the networked information economy”? Was Benkler really seeing the emergence of a new culture, or was he looking at the temporary bloom of a subculture on a new network and mistaking it for a new networked culture?
In 2008, two years after The Wealth of Networks came out, Tom Slee was mulling over Benkler’s contested Barbie, and he decided to see whether anything had changed since Benkler did his search. So Slee entered “Barbie” into the Google search box. The first page of results, he reported, looked radically different from what Benkler had found:
1. Barbie.com — Activities and Games for Girls Online! (together with eight other links to My Scene, Everythinggirl, Polly Pocket, Kellyclub, and so on).
2. Barbie.co.uk — Activities and Games for Girls Online!
3. Barbie — Wikipedia, the free encyclopedia
4. Barbie Collector – (The official Mattel site for Barbie Collector)
5. Barbie Girls
6. Mattel — Our Toys — Barbie
7. The Distorted Barbie
8. YouTube — barbie girl — aqua
9. Barbie — Barbie Dress up — Fashion for Barbie
10. Barbie.ca [Slee lives in Canada]
There are traces of the contested Barbie here — the “distorted Barbie” site, the YouTube parody video, and the Wikipedia page, which includes critical views of the doll — but, as Slee noted, “this search is basically owned by Mattel.” Google’s results were still generated by an online popular vote, but the popular consensus had shifted. A year and a half later, Slee again googled Barbie. Here’s how the first page of results stacked up:
1. Barbie.com – Activities and Games for Girls Online! (together with eight other links to My Scene, Everythinggirl, Polly Pocket, Kellyclub, and so on).
2. Barbie.com – Fun and Games
3. Barbie – Wikipedia, the free encyclopedia
4. News results for barbie (with several other links)
5. Barbie Collector – (The official Mattel site for Barbie Collector)
6. Barbie.co.uk – Activities and Games for Girls Online!
7. Barbie.ca
8. Barbie Girls – and a sublink
9. Celebrate 50 Years of Barbie
10. Video results for barbie – with two links to Aqua’s Barbie Girl video
11. Searches related to barbie – all strictly orthodox except for one about Taiwanese actress and singer Barbie Xu
The contested Barbie, far from being “transparent,” has now been relegated to critiques of Barbie within the Wikipedia site. “Yes,” said Slee, “the little girl who searches for Barbie on Google will now encounter a commodity toy.”
Today, another three and a half years having gone by, I googled Barbie again. Here is what I saw:
If Mattel could simply purchase the first page of Google’s search results for Barbie, the page would look pretty much the same, right down to the photo of the doll’s famously ample chest. What’s happened, in other words, is that the Google search engine has come to replicate what the Overture search engine provided ten years ago. Online social production and traditional market production have ended up, in the case of this culturally contested product, producing the same thing!
Now, it’s true that the workings of Google’s search engine have changed over the last ten years. But Google still does a good job of reflecting back to us the popular consensus. It’s showing us how the web sees Barbie, which has become, if anything, even less contested than the way the general culture sees Barbie. What Benkler was seeing back in the early 2000s, we now know, was not the popular networked information economy. He was seeing an early version of the networked information economy that was skewed to the atypical sensibilities of the web’s pioneers. What we see today is a much truer version of a “democratized” information economy, which turns out to be bland, homogenized, and infused with a consumerist ethic. The contested Barbie has been pushed back into “feminist-criticism symposia and undergraduate courses” — back to the offline and online margins. Slee was not quite right when he said that the more recent Google searches for Barbie are “owned by Mattel.” They’re not. They’re owned by us. The distinction, though, is trivial.
Photo by lil’ wiz.
Digital dualism denialism
We talk a lot about “being online” and “being offline” or “going online” and “going offline,” but what do those terms mean? The distinction between online and offline is an outdated holdover from twenty years ago, when “going online,” through America Online or Prodigy or Compuserve, was like “going shopping.” It was an event with clear demarcations, in time and space, and it usually comprised a limited and fairly routinized set of activities. As Net access has expanded, to the point that, for many people, it is coterminous with existence itself, the line between online and offline has become so blurred that the terms have become useless or, worse, misleading. When we talk about being online or being offline these days, we’re deluding ourselves.
That, anyway, is the argument that some writers at the blog Cyborgology have been making over the past couple of years. They’ve been building, in fits and starts, a case against what they call “digital dualism.” The phrase was introduced by Nathan Jurgenson in a post in February 2011. He took umbrage at people’s continuing use of the words “online” and “offline” to describe their experiences, particularly the implication that the online and the offline are separate realms:
Some have a bias to see the digital and the physical as separate; what I am calling digital dualism. Digital dualists believe that the digital world is “virtual” and the physical world “real.” This bias motivates many of the critiques of sites like Facebook and the rest of the social web and I fundamentally think this digital dualism is a fallacy.
He proposed, instead, an “opposite perspective,” which he termed “augmented reality.” The augmented reality view sees “the digital and physical [as] increasingly meshed”:
I am proposing an alternative view that states that our reality is both technological and organic, both digital and physical, all at once. We are not crossing in and out of separate digital and physical realities, ala The Matrix, but instead live in one reality, one that is augmented by atoms and bits.
The observation that “our reality is both technological and organic, both digital and physical,” is banal. I can’t imagine anyone on the planet disagreeing with it. Being natural-born toolmakers, human beings have always lived in a world that is both technological and organic, that is at once natural and, as Thomas Hughes put it, “human-built.” Nor can I imagine that anyone actually believes that the offline and the online exist in immaculate isolation from each other, separated, like Earth and Narnia, by some sort of wardrobe-portal. Jurgenson uses the charge of digital dualism to dismiss a host of very different critiques of digital media, by people like Sherry Turkle, Evgeny Morozov, Jaron Lanier, Mark Bauerlein, and myself, but that seems little more than intellectual stereotyping. It is the “meshing” of the offline and the online, the physical and the digital, that is the fundamental subject and the fundamental concern of pretty much every critical examination of the Net—the generally positive ones as well as the generally negative ones—that I’ve come across. If the two states actually existed in isolation, most of the criticism of digital media would be rendered irrelevant.
Jurgenson came close to conceding this point in a later post in which he presented four “conceptual categories” to describe different ways of viewing “the relationship between the physical and digital”:
Strong Digital Dualism: The digital and the physical are different worlds, have different properties, and do not interact.
Mild Digital Dualism: The digital and physical are different worlds, have different properties, and do interact.
Mild Augmented Reality: The digital and physical are part of one reality, have different properties, and interact.
Strong Augmented Reality: The digital and physical are part of one reality and have the same properties.
As Jurgenson more or less admits, the two extreme categories, perfect separation and perfect sameness, are made of straw. They are purely theoretical constructs, notable for their lack of members. Basically everyone, he grants, agrees that the digital and the physical “have different properties but interact.” So the distinction on which Jurgenson’s digital-dualism theorizing hinges is between those “mild dualists” who see the digital and physical as “different worlds” and those “mild augmentationists” who see the digital and physical as “one reality.” We’ve now entered a realm of very fuzzy semantic distinctions. What the terms “worlds” and “reality” actually denote is not at all clear. As Jurgenson allows, “Sometimes mild dualism and mild augmentation look very similar.” Well, yes. It’s not altogether impossible for “one reality” to encompass “different worlds.” But then, having painted himself into a corner, he leaps out of the corner in order to criticize those who “waffle back and forth across each of these categories.” Given the vagueness of the categories, a bit of waffling seems not only inevitable but wise.
Jurgenson makes his intent clearer in “The IRL Fetish,” an essay he published in The New Inquiry last year. What seems to underpin and inform his critique of digital dualism is his annoyance at people who sentimentalize and “over-valorize” the time they spend offline and make a self-satisfied show of their resistance to going online:
Every other time I go out to eat with a group, be it family, friends, or acquaintances of whatever age, conversation routinely plunges into a discussion of when it is appropriate to pull out a phone. People boast about their self-control over not checking their device, and the table usually reaches a self-congratulatory consensus that we should all just keep it in our pants. … What a ridiculous state of affairs this is. To obsess over the offline and deny all the ways we routinely remain disconnected is to fetishize this disconnection.
Jurgenson is making a valid point here. There is something tiresome about the self-righteousness of those who see, and promote, their devotion to the offline as a sign of their superiority. It’s like those who can’t wait to tell you that they don’t own a TV. But that’s a quirk that has more to do with individual personality than with some general and delusional dualist mentality. Jurgenson’s real mistake is to assume, grumpily, that pretty much everyone who draws a distinction in life between online experience and offline experience is in the grip of a superiority complex or is striking some other kind of pose. That provides him with an easy way to avoid discussing a far more probable and far more interesting interpretation of contemporary behavior and attitudes: that people really do feel a difference and even a conflict between their online experience and their offline experience. They’re not just engaged in posing or fetishization or valorization or some kind of contrived identity game. They’re not faking it. They’re expressing something important about themselves and their lives—something real. Jurgenson doesn’t want to admit that possibility. To him, people are just worshipping a phantom: “The notion of the offline as real and authentic is a recent invention, corresponding with the rise of the online.”
Another Cyborgology writer, David Banks, pushes Jurgenson’s dismissal of people’s sense of a tension between online and offline to an absurd extreme. In a recent post, he observes:
Ever since Nathan posted [his original piece on digital dualism] I have been preoccupied with a singular question: where did this thinking come from? Its too pervasive and readily accepted as truth to be a trendy idea or even a generational divide. Every one of Cyborgology’s regular contributors (and some of our guest authors) hear digital dualist rhetoric coming from their students. The so-called “digital natives” lament their peer’s neglect of the “the real world.” Digital dualism’s roots run deep and can be found at the very core of modern thought. Indeed, digital dualism seems to predate the very technologies that it inaccurately portrays.
If it weren’t for that supercilious “inaccurately,” one might expect, or at least hope, that at this point Banks would take people’s “pervasive” views at face value and would dedicate himself to a deep exploration of why people feel that digital media are eroding their sense of “the real.” Instead, he dismisses people’s concerns. He claims that they’re just reenacting, in a new setting, Rousseau’s view of masturbation as lying outside the natural sexual order:
Rousseau claims at different points in his Confessions that masturbation is a supplement to nature: something constructed or virtual that competes with an existing real or natural phenomenon. Derrida, in his Of Grammatology asserts that erotic thoughts not only precede sexual action (you think about what you do before you do it) but that there is no basis for finding sex any more “real” than auto-affective fantasies. This “logic of the supplement” mistakes something that was “always already” there with an unneeded addition.
That’s an awfully tortured way of denying the obvious: the reason people struggle with the tension between online experience and offline experience is that there is a tension between online experience and offline experience, and people are smart enough to understand, to feel, that the tension does not evaporate as the online intrudes ever further into the offline. In fact, the growing interpenetration between the two modes of experience—the two states of being—actually ratchets up the tension. We sense a threat in the hegemony of the online because there’s something in the offline that we’re not eager to sacrifice.
In a rejoinder to Jurgenson’s “The IRL Fetish,” Michael Sacasas gently makes the point that Jurgenson, Banks, and the other digital dualism denialists go out of their way to avoid seeing:
Jurgenson’s [assertion] – “There was and is no offline … it has always been a phantom.” – is only partially true. In the sense that there was no concept of the offline apart from the online and that the online, once it appears, always penetrates the offline, then yes, it is true enough. However, this does not negate the fact that while there was no concept of the offline prior to the appearance of the online, there did exist a form of life that we can retrospectively label as offline. There was, therefore, an offline (even if it wasn’t known as such) experience realized in the past against which present online/offline experience can be compared. What the comparison reveals is that a form of consciousness, a mode of human experience is being lost. It is not unreasonable to mourn its passing, and perhaps even to resist it.
Nature existed before technology gave us the idea of nature. Wilderness existed before society gave us the idea of wilderness. Offline existed before online gave us the idea of offline. Grappling with the idea of nature and the idea of wilderness, as well as their contrary states, has been the source of much of the greatest philosophy and art for at least the last two hundred years. We should celebrate the fact that nature and wilderness have continued to exist, in our minds and in actuality, even as they have been overrun by technology and society. There’s no reason to believe that grappling with the online and the offline, and their effects on lived experience and the formation of the self, won’t also produce important thinking and art. As Sacasas implies, the arrival of a new mode of experience provides us with an opportunity to see more clearly an older mode of experience. To do that, though, requires the drawing of distinctions. If we rush to erase or obscure the distinctions, for ideological or other reasons, we sacrifice that opportunity.
Yes, digital dualism can go too far. But the realization of that fact—the fact that the online and the offline are not isolated states; that they together influence and shape our lives, and in ways that can’t always be teased apart—should be a spur to thinking more deeply about people’s actual experience of the online and the offline and, equally important, how they sense that experience. What’s lost? What’s gained? An augmentation, it’s worth remembering, is both part of and separate from that which it is added to. To deny the separateness is as wrongheaded as to deny the togetherness. Digital dualism denialism does not open up new frontiers of critical and creative thought and action. It forecloses them.
Photo by Florian.
Students to e-textbooks: no thanks
Because the horse is not dead, I feel I’m allowed to keep beating it. So: Another study of student attitudes toward paper and electronic textbooks has appeared, and like earlier ones — see here, here, here, for example — it reveals that our so-called digital natives prefer print. The new study, by four researchers at Ryerson University in Toronto, appears in the Journal for Advancement of Marketing Education. “Although advocates of digitized information believe that millennial students would embrace the paperless in-person or online classroom, this is not proving to be the case,” they write, as studies to date find “most students reiterating their preference for paper textbooks.”
They point out that a lot of the research up to now has started “with the assumption that the innovation [in e-textbooks] is an improvement over previous technology”:
Undergraduate students are generally assumed to be skilled in using digital resources for acquiring the knowledge necessary to achieve success in tests and exams. However, researchers often overlook students’ personal beliefs about how they learn and study most effectively. Their resistance to replacing paper textbooks with e-textbooks together with an ongoing desire to be able to print electronic content suggests that paper-based information serves students’ needs better in the educational context.
To explore the reasons for the continuing resistance to digital books, they surveyed and conducted focus groups with current students who have used both e-books and printed books in classes. They found students believe “that the paper textbook remains the superior technology for studying and achieving academic success.” Print’s primary advantage is that it presents “fewer distractions,” the students said: “The paper textbook helps them to avoid the distractions of being on the computer or the Internet, the temptations associated with checking e-mail, Facebook, or surfing the Web for unrelated information.” A second benefit is that printed works encourage deeper study: “Students believe they learn more using the paper textbook versus the e-textbook in part because they are able to study longer with less physical and mental fatigue.”
Students also felt that highlighting and otherwise marking passages can be done more effectively with printed pages than digital ones. Here’s a simple but telling example: “electronic sticky notes, in particular, do not provide the same memory assistance as the paper sticky note. Students feel that they have to remember to purposely search for the electronic sticky note, in contrast to the easily observable paper sticky note.” Students also liked that “they have more choices for when and where they can access” a print book’s content compared with an e-book’s. Finally, the researchers found that “students consider learning and studying to be a personal activity and therefore the decision about which tools to use for learning and studying is unaffected by the opinions of friends.”
The scholars conclude:
This study demonstrates that two factors underpin students’ intention to resist giving up paper textbooks: Facilitates Study Processes and Permanence. The paper textbook is perceived as a critical tool in facilitating students’ learning and study processes. The fluid and dynamic nature of digital content compared to the more consistent and predictable nature of information on paper appears to be a barrier to the acquisition of knowledge for the purpose of assessment. Students perceive paper textbooks as the best format for extended reading and studying and for locating information. Students believe that they learn more when studying from paper textbooks. Moreover, paper textbooks allow students to manage content in whatever way they wish to study the material. …
Students’ reaction to the relative impermanence of electronic content is to continue to resist giving up the paper textbooks. Paper textbooks permit students to have unlimited access to information at any time during a course as well as after the course ends. Moreover, these students have come of age during a time where large organizations increasingly control the students’ access to online content. In the case of paper textbooks, content is controlled by the student and not by publishers or IT developers who continuously make changes to computer hardware or software in order to restrict access to the content.
What’s most revealing about this study is that, like earlier research, it suggests that students’ preference for printed textbooks reflects the real pedagogical advantages they experience in using the format: fewer distractions, deeper engagement, better comprehension and retention, and greater flexibility in accommodating idiosyncratic study habits. Electronic textbooks will certainly get better, and will certainly have advantages of their own, but they won’t replicate the particular advantages inherent to the tangible form of the printed book.
Photo from Univers beeldbank.
Simulating the singularity
Some fear that the Singularity, when it arrives, will render the human race obsolete. Even if we survive, we’ll toil under the jackboots of our gizmos. But there’s also a sunnier view. If the Singularity goes well, we’ll not only live in what Richard Brautigan termed “mutually programming harmony” with our computers, but we’ll be immortal, our essence uploaded into massively redundant databases for eternity. Chief Singularitarian and newly minted Googler Ray Kurzweil has said that he even plans to bring his deceased dad back to life, reanimating his spirit from a few stray strands of DNA and a closetful of mementos.
But what if the Singularity doesn’t arrive? What if the Singularity turns out to be, as Kevin Kelly once argued, a “meaningless” mirage? It may not matter. Software allows us to simulate all sorts of real-world phenomena, and there’s no reason to believe that it won’t allow us to simulate our own post-Singularity immortality. Alan Jacobs points to a new article in the Guardian that describes a forthcoming app called LivesOn, which, by analyzing your social networking activity while you’re alive, will be able to algorithmically replicate that activity in perpetuity after you expire:
The service uses Twitter bots powered by algorithms that analyse your online behaviour and learn how you speak, so it can keep on scouring the internet, favouriting tweets and posting the sort of links you like, creating a personal digital afterlife. As its tagline explains: “When your heart stops beating, you’ll keep tweeting.”
LivesOn was created as a lame, if effective, publicity stunt by a British advertising agency. But the idea is sound. As more and more of our earthly self comes to be defined by our online profiles and postings, our digital garb, it becomes a relatively easy task for a computer to replicate that self, dynamically and without interruption, after we’re gone. As long as you keep posting, liking, and tweeting, spewing links to funny GIFs and trenchant longform texts, circulating the occasional, digitally fabricated Instagram photo or Vine video, your friends and acquaintances will never need know that your body has shuffled off the stage. For all social intents and purposes — and what other intents and purposes are there? — you’ll live forever. I update, therefore I am.
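The technical bar for this kind of simulation is strikingly low. Below is a purely illustrative sketch of the general idea, not LivesOn’s actual code (which hasn’t been published): a tiny Markov-chain generator trained on an account’s past posts, which can keep producing plausible new ones indefinitely. The sample posts and the ghost_post helper are invented for the example.

```python
# Minimal sketch of a posthumous posting bot: a Markov-chain generator
# trained on an account's past posts. Purely illustrative; LivesOn's
# actual methods are not public, and the sample posts are invented.

import random
from collections import defaultdict

past_posts = [
    "reading a great longform piece on attention and distraction",
    "another funny gif for your friday afternoon",
    "a great piece on reading in the age of distraction",
]

# Build a table mapping each word to the words observed to follow it.
transitions = defaultdict(list)
for post in past_posts:
    words = post.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)

def ghost_post(max_words=12):
    """Generate a new post in the old voice, one word at a time."""
    word = random.choice([p.split()[0] for p in past_posts])
    output = [word]
    for _ in range(max_words - 1):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(ghost_post())  # keeps "tweeting" long after the author has stopped
```

A real service would train on far more text and post through a platform’s API, but the principle is the same: the simulated self is a statistical echo of the documented one.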
Who’s to say, for that matter, that most of the presences on social networks aren’t already dead, their ongoing existences merely simulated by software? Would you really know the difference?
Image: Detail from Parmigianino’s “Self-Portrait in a Convex Mirror.”