
The unbearable unlightness of AI


There is a continuing assumption — a faith, really — that at some future moment, perhaps only a decade or two away, perhaps even nearer than that, artificial intelligence will, by means yet unknown, achieve consciousness. A window will open on the computer’s black box, and light will stream in. The universe will take a new turn, as the inanimate becomes, for a second time, animate.

George Lakoff, the linguist who cowrote Metaphors We Live By, says it ain’t going to happen. In a fascinating article by Michael Chorost, Lakoff argues not only that language, being essentially metaphorical, is inextricably bound up in our bodily existence, but that cognition and consciousness, too, flow from our experience as creatures on the earth. Recent neuroscience experiments seem to back Lakoff up. They suggest that even our most abstract thoughts involve the mental simulation of physical experiences.

Writes Chorost:

In a 2011 paper in the Journal of Cognitive Neuroscience, Rutvik Desai, an associate professor of psychology at the University of South Carolina, and his colleagues presented fMRI evidence that brains do in fact simulate metaphorical sentences that use action verbs. When reading both literal and metaphorical sentences, their subjects’ brains activated areas associated with control of action. “The understanding of sensory-motor metaphors is not abstracted away from their sensory-motor origins,” the researchers concluded.

Textural metaphors, too, appear to be simulated. That is, the brain processes “She’s had a rough time” by simulating the sensation of touching something rough. Krish Sathian, a professor of neurology, rehabilitation medicine, and psychology at Emory University, says, “For textural metaphor, you would predict on the Lakoff and Johnson account that it would recruit activity- and texture-selective somatosensory cortex, and that indeed is exactly what we found.”

The evidence points to a new theory about the source of consciousness:

What’s emerging from these studies isn’t just a theory of language or of metaphor. It’s a nascent theory of consciousness. Any algorithmic system faces the problem of bootstrapping itself from computing to knowing, from bit-shuffling to caring. Igniting previously stored memories of bodily experiences seems to be one way of getting there.

That, as Chorost notes, “raises problems for artificial intelligence”:

Since computers don’t have bodies, let alone sensations, what are the implications of these findings for their becoming conscious—that is, achieving strong AI? Lakoff is uncompromising: “It kills it.” Of Ray Kurzweil’s singularity thesis, he says, “I don’t believe it for a second.” Computers can run models of neural processes, he says, but absent bodily experience, those models will never actually be conscious.

Then again, even the algorithmic thinking of computers has a physical substrate. There is no software without hardware. The problem is that computers, unlike animals, have no sensory experience of their own existence. They are, or at least appear to be, radically dualist in their operation, their software oblivious to their hardware. If a computer could think metaphorically, what kind of metaphors would it come up with? It’s hard to imagine they’d be anything recognizable to humans.

Image: “Camera Obscura Test 2” by Jon Lewis.


Filed under Uncategorized

The manipulators


In “The Manipulators,” a new essay in the Los Angeles Review of Books, I explore two much-discussed documents published earlier this year: “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks” by Adam Kramer et al. and “Judgment in Case C-131/12: Google Spain SL, Google Inc v Agencia Española de Protección de Datos, Mario Costeja González” by the Court of Justice of the European Union. The latter, I argue, helps us make sense of the former. Both challenge us to think afresh about the past and the future of the net.

Here’s how the piece begins:

Since the launch of Netscape and Yahoo twenty years ago, the development of the internet has been a story of new companies and new products, a story shaped largely by the interests of entrepreneurs and venture capitalists. The plot has been linear; the pace, relentless. In 1995 came Amazon and Craigslist; in 1997, Google and Netflix; in 1999, Napster and Blogger; in 2001, iTunes; in 2003, MySpace; in 2004, Facebook; in 2005, YouTube; in 2006, Twitter; in 2007, the iPhone and the Kindle; in 2008, Airbnb; in 2010, Instagram; in 2011, Snapchat; in 2012, Coursera; in 2013, Google Glass. It has been a carnival ride, and we, the public, have been the giddy passengers.

This year something changed. The big news about the net came not in the form of buzzy startups or cool gadgets but in the shape of two dry, arcane documents. One was a scientific paper describing an experiment in which researchers attempted to alter the moods of Facebook users by secretly manipulating the messages they saw. The other was a ruling by the European Union’s highest court granting citizens the right to have outdated or inaccurate information about them erased from Google and other search engines. Both documents provoked consternation, anger, and argument. Both raised important, complicated issues without resolving them. Arriving in the wake of revelations about the NSA’s online spying operation, both seemed to herald, in very different ways, a new stage in the net’s history — one in which the public will be called upon to guide the technology, rather than the other way around. We may look back on 2014 as the year the internet began to grow up.

Read on.

Image: “Marionettes” by Mario De Carli.



Students and their devices


“The practical effects of my decision to allow technology use in class grew worse over time,” writes Clay Shirky in explaining why he’s decided to ban laptops, smartphones, and tablets from the classes he teaches at NYU. “The level of distraction in my classes seemed to grow, even though it was the same professor and largely the same set of topics, taught to a group of students selected using roughly the same criteria every year. The change seemed to correlate more with the rising ubiquity and utility of the devices themselves, rather than any change in me, the students, or the rest of the classroom encounter.”

When students put away their devices, Shirky continues, “it’s as if someone has let fresh air into the room. The conversation brightens, [and] there is a sense of relief from many of the students. Multi-tasking is cognitively exhausting — when we do it by choice, being asked to stop can come as a welcome change.”

It’s been more than ten years now since Cornell’s Helene Hembrooke and Geri Gay published their famous “The Laptop and the Lecture” study, which documented how laptop use reduces students’ retention of material presented in class.* Since then, the evidence of the cognitive toll that distractions, interruptions, and multitasking inflict on memory and learning has only grown. I surveyed a lot of the evidence in my 2010 book The Shallows, and Shirky details several of the more recent studies. The evidence fits with what educational psychologists have long known: when a person’s cognitive load — the amount of information streaming into working memory — rises beyond a certain, quite low threshold, learning suffers. There’s nothing counterintuitive about this. We’ve all experienced cognitive overload and its debilitating effects.

Earlier this year, Dan Rockmore, a computer scientist at Dartmouth, wrote of his decision to ban laptops and other personal computing devices from his classes:

I banned laptops in the classroom after it became common practice to carry them to school. When I created my “electronic etiquette policy” (as I call it in my syllabus), I was acting on a gut feeling based on personal experience. I’d always figured that, for the kinds of computer-science and math classes that I generally teach, which can have a significant theoretical component, any advantage that might be gained by having a machine at the ready, or available for the primary goal of taking notes, was negligible at best. We still haven’t made it easy to type notation-laden sentences, so the potential benefits were low. Meanwhile, the temptation for distraction was high. I know that I have a hard time staying on task when the option to check out at any momentary lull is available; I assumed that this must be true for my students, as well.

As Rockmore followed the research on classroom technology use, he found that the empirical evidence backed up his instincts.

No one would call Shirky or Rockmore a Luddite or a nostalgist or a technophobe. They are thoughtful, analytical scholars and teachers who have great enthusiasm and respect for computers and the internet. So their critiques of classroom computer use are especially important. Shirky, in particular, has always had a strong inclination to leave decisions about computer and phone use up to his students. He wouldn’t have changed his mind without good reason.

Still, even as the evidence grows, there are many teachers who, for a variety of reasons, continue to oppose any restrictions on classroom computer use — and who sometimes criticize colleagues that do ban gadgets as blinkered or backward-looking. At this point, some of the pro-gadget arguments are starting to sound strained. Alexander Reid, an English professor at the University at Buffalo, draws a fairly silly parallel between computers and books:

Can we imagine a liberal arts degree where one of the goals is to graduate students who can work collaboratively with information/media technologies and networks? Of course we can. It’s called English. It’s just that the information/media technologies and networks take the form of books and other print media. Is a book a distraction? Of course. Ever try to talk to someone who is reading a book? What would you think of a student sitting in a classroom reading a magazine, doodling in a notebook or doing a crossword puzzle? However, we insist that students bring their books to class and strongly encourage them to write.

Others worry that putting limits on gadget use, even if justified pedagogically, should be rejected as paternalistic. Rebecca Schuman, who teaches at Pierre Laclede Honors College, makes this case:

My colleagues and I joke sometimes that we teach “13th-graders,” but really, if I confiscate laptops at the door, am I not creating a 13th-grade classroom? Despite their bottle-rocket butt pranks and their 10-foot beer bongs, college students are old enough to vote and go to war. They should be old enough to decide for themselves whether they want to pay attention in class — and to face the consequences if they do not.

A related point, also made by Schuman, is that teachers, not computers, are ultimately to blame if students get distracted in class:

You want students to close their machines and pay attention? Put them in a smaller seminar where their presence actually registers and matters, and be engaging enough — or, in my case, ask enough questions cold — that students aren’t tempted to stick their faces in their machines in the first place.

The problem with blaming the teacher, or the student, or the class format — the problem with treating the technology as a neutral object — is that it ignores the way software and social media are painstakingly designed to exploit the mind’s natural inclination toward distractedness. Shirky makes this point well, and I’ll quote him here at some length:

Laptops, tablets and phones — the devices on which the struggle between focus and distraction is played out daily — are making the problem progressively worse. Any designer of software as a service has an incentive to be as ingratiating as they can be, in order to compete with other such services. “Look what a good job I’m doing! Look how much value I’m delivering!”

This problem is especially acute with social media, because . . . social information is immediately and emotionally engaging. Both the form and the content of a Facebook update are almost irresistibly distracting, especially compared with the hard slog of coursework. (“Your former lover tagged a photo you are in” vs. “The Crimean War was the first conflict significantly affected by use of the telegraph.” Spot the difference?)

Worse, the designers of operating systems have every incentive to be arms dealers to the social media firms. Beeps and pings and pop-ups and icons, contemporary interfaces provide an extraordinary array of attention-getting devices, emphasis on “getting.” Humans are incapable of ignoring surprising new information in our visual field, an effect that is strongest when the visual cue is slightly above and beside the area we’re focusing on. (Does that sound like the upper-right corner of a screen near you?)

The form and content of a Facebook update may be almost irresistible, but when combined with a visual alert in your immediate peripheral vision, it is—really, actually, biologically—impossible to resist. Our visual and emotional systems are faster and more powerful than our intellect; we are given to automatic responses when either system receives stimulus, much less both. Asking a student to stay focused while she has alerts on is like asking a chess player to concentrate while rapping their knuckles with a ruler at unpredictable intervals.

A teacher has an obligation not only to teach but to create, or at least try to create, a classroom atmosphere that is conducive to the work of learning. Ignoring technology’s influence on that atmosphere doesn’t do students any favors. Here’s some of what Anne Curzan, a University of Michigan English professor, tells her students when she explains why she doesn’t want them to use computers in class:

Now I know that one could argue that it is your choice about whether you want to use this hour and 20 minutes to engage actively with the material at hand, or whether you would like to multitask. You’re not bothering anyone (one could argue) as you quietly do your email or check Facebook. Here’s the problem with that theory: From what we can tell, you are actually damaging the learning environment for others, even if you’re being quiet about it. A study published in 2013 found that not only did the multitasking student in a classroom do worse on a postclass test on the material, so did the peers who could see the computer. In other words, the off-task laptop use distracted not just the laptop user but also the group of students behind the laptop user. (And I get it, believe me. I was once in a lecture where the woman in front of me was shoe shopping, and I found myself thinking at one point, “No, not the pink ones!” I don’t remember all that much else about the lecture.)

Our attention is governed not just by our will but by our environment. That’s how we’re built.

I suspect the debate over classroom computer use has become a perennial one, and that it will blossom anew every September. That’s good, as it’s an issue that deserves ongoing debate. But there is a point on which perhaps everyone can agree, and from that point of agreement might emerge constructive action. It’s a point about design, and Shirky gets at it in his article:

The fact that hardware and software is being professionally designed to distract was the first thing that made me willing to require rather than merely suggest that students not use devices in class. There are some counter-moves in the industry right now — software that takes over your screen to hide distractions, software that prevents you from logging into certain sites or using the internet at all, phones with Do Not Disturb options — but at the moment these are rear-guard actions. The industry has committed itself to an arms race for my students’ attention, and if it’s me against Facebook and Apple, I lose.

Computers and software can be designed in many different ways, and the design decisions will always reflect the interests of the designers (or their employers). Beyond the laptops-or-no-laptops-debate lies a broader and more important discussion about how computer technology has come to be designed — and why.

*This post, and the other posts cited within it, concerns the use of personal computing devices in classes in which those devices have not been formally incorporated as teaching aids. There are, of course, plenty of classes in which computers are built into the teaching plan. It’s perhaps noteworthy, though, to point out that, in the “Laptop and Lecture” study, students who used their laptops to look at sites relevant to the class actually did even worse on tests of retention than did students who used their computers to look at irrelevant sites.

Image: “Viewmaster” by Geof Wilson.



Speak, algorithm


Lost in yesterday’s coverage of the Apple Watch was a small software feature that, when demonstrated on the stage of the Flint Center, earned brief but vigorous applause from the audience. It was the watch’s ability to scan incoming messages and suggest possible responses. The Verge’s live-blogging crew were wowed.


The example Apple presented was pretty rudimentary. The incoming message included the question “Are you going with Love Shack or Wild Thing?” To which the watch suggested three possible answers: Love Shack, Wild Thing, Not Sure. Big whoop. In terms of natural language processing, that’s like Watson with a lobotomy.

But it was just a taste of a much more sophisticated “predictive text” capability, called QuickType, that Apple has built into the latest version of its smartphone operating system. “iOS 8 predicts what you’ll say next,” explains the company. “No matter whom you’re saying it to.”

Now you can write entire sentences with a few taps. Because as you type, you’ll see choices of words or phrases you’d probably type next, based on your past conversations and writing style. iOS 8 takes into account the casual style you might use in messages and the more formal language you probably use in Mail. It also adjusts based on the person you’re communicating with, because your choice of words is likely more laid back with your spouse than with your boss.
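Under the hood, features like this are typically driven by a statistical language model trained on a user’s past text. As a rough sketch — and with no knowledge of Apple’s actual, undisclosed implementation — here is a minimal bigram predictor in Python. The training sentences and the two per-recipient models are invented for illustration; they simply mirror Apple’s claim that suggestions shift with the person you’re writing to:

```python
from collections import defaultdict, Counter

class BigramPredictor:
    """Toy next-word predictor: ranks candidate words by how often
    they followed the previous word in the training text."""

    def __init__(self):
        # maps each word to a Counter of the words that followed it
        self.following = defaultdict(Counter)

    def train(self, text):
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.following[prev][nxt] += 1

    def suggest(self, prev_word, n=3):
        """Return up to n of the most frequent next words after prev_word."""
        counts = self.following[prev_word.lower()]
        return [word for word, _ in counts.most_common(n)]

# Hypothetical per-recipient models: casual habits for a spouse,
# formal habits for a boss. The training strings are made up.
casual = BigramPredictor()
casual.train("see you soon love you see you tonight love you lots")

formal = BigramPredictor()
formal.train("please see the attached report please see the revised draft")

print(casual.suggest("you"))   # casual follow-ons to "you"
print(formal.suggest("see"))   # prints ['the']
```

A real system would condition on far more context than one word, but even this sketch shows the mechanism: the “prediction” is just a ranking of your own past habits, recipient by recipient.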

Now, this may all turn out to be a clumsy parlor trick. If the system isn’t adept at mimicking a user’s writing style and matching it to the intended recipient — if it doesn’t nail both text and context — the predictive-text feature will rarely be used, except for purposes of making “stupid robot” jokes. But if the feature actually turns out to be “good enough” — or if our conversational expectations devolve to a point where the automated messages feel acceptable — then it will mark a breakthrough in the automation of communication and even thought. We’ll begin allowing our computers to speak for us.

Is that a development to be welcomed? It seems more than a little weird that Apple’s developers would get excited about an algorithm that will converse with your spouse on your behalf, channeling the “laid back” tone you deploy for conjugal chitchat. The programmers seem to assume that romantic partners are desperate to trade intimacy for efficiency. I suppose the next step is to get Frederick Winslow Taylor to stand beside the marriage bed with a stopwatch and a clipboard. “Three caresses would have been sufficient, ma’am.”

In The Glass Cage, I argue that we’ve embraced a wrong-headed and ultimately destructive approach to automating human activities, and in Apple’s let-the-software-do-the-talking feature we see a particularly disquieting manifestation of the reigning design ethic. Technical qualities are given precedence over human qualities, and human qualities come to be seen as dispensable.

When we allow ourselves to be guided by predictive algorithms, in acting, speaking, or thinking, we inevitably become more predictable ourselves, as Rochester Institute of Technology philosopher Evan Selinger pointed out in discussing the Apple system:

Predicting you is predicting a predictable you. Which is itself subtracting from your autonomy. And it’s encouraging you to be predictable, to be a facsimile of yourself. So it’s a prediction and a nudge at the same moment.

It’s a slippery slope, and it becomes more slippery with each nudge. Predicted responses begin to replace responses, simply because it’s a little more efficient to simulate a response — a thought, a sentence, a gesture — than to undertake the small amount of work necessary to have a response. And then that small amount of work begins to seem like a lot of work — like correcting your own typos rather than allowing the spellchecker to do it. And then, as original responses become rarer, the predictions become predictions based on earlier predictions. Where does the algorithm end and the self begin?
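That loop, in which predictions are trained on earlier predictions, can be made concrete with a toy simulation. Everything below is an illustrative assumption, not a model of any real predictive-text system: a tiny made-up corpus of three messages, simple bigram counting, and greedy decoding that always picks the single most likely next word:

```python
from collections import defaultdict, Counter

def train(text):
    """Count, for each word, which words follow it and how often."""
    model = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, start, length=12):
    """Greedily emit the single most likely next word at each step."""
    out = [start]
    for _ in range(length):
        candidates = model[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

# Three human-written messages with small variations in wording.
corpus = ("i had a rough day but dinner was great "
          "i had a long day and dinner was fine "
          "i had a quiet day and dinner was late")

text = corpus
for step in range(3):
    model = train(text)        # retrain on whatever was last produced
    text = generate(model, "i")
    print(f"step {step}: {len(set(text.split()))} distinct words")
```

The vocabulary shrinks after the first pass and never recovers: the variants (“long,” “quiet,” “fine,” “late”) vanish, and the model settles on one stock sentence it then reproduces forever. Greedy decoding exaggerates the effect, but the direction of the drift is the point.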

And if we assume that the people we’re exchanging messages with are also using the predictive-text program to formulate their responses . . . well, then things get really strange. Everything becomes a parlor trick.

Image: Thomas Edison’s talking doll.



Apple’s small big thing


Over at the Time site, I have a short commentary on the Apple Watch. It begins:

Many of us already feel as if we’re handcuffed to our computers. With its new smart watch, unveiled today in California, Apple is hoping to turn that figure of speech into a literal truth.

Apple has a lot riding on the diminutive gadget. It’s the first major piece of hardware the company has rolled out since the iPad made its debut four years ago. It’s the first new product to be designed under the purview of fledgling CEO Tim Cook. And, when it goes on sale early next year, it will be Apple’s first entry in a much-hyped product category — wearable computers — that has so far fallen short of expectations. Jocks and geeks seem eager to strap computers onto their bodies. The rest of us have yet to be convinced. …

Read on.

(Apple’s live stream of its event today was, by the way, a true comedy of errors. It seemed like the company was methodically going down a checklist of all the possible ways you can screw up a stream, from running audio feeds in different languages simultaneously to bouncing around in time in a way that would have made Billy Pilgrim dizzy.)

Image: Darren Birgenheier.



Big Internet


We talk about Big Oil and Big Pharma and Big Ag. Maybe it’s time we started talking about Big Internet.

That thought crossed my mind after reading a couple of recent posts. One was Scott Rosenberg’s piece about a renaissance in the ancient art of blogging. I hadn’t even realized that blogs were a thing again, but Rosenberg delivers the evidence. Jason Kottke, too, says that blogging is once again the geist in our zeit. Welcome back, world.

The other piece was Alan Jacobs’s goodbye to Twitter. Jacobs writes of a growing sense of disillusionment and disappointment with the ubiquitous microblogging platform:

As long as I’ve been on Twitter (I started in March 2007) people have been complaining about Twitter. But recently things have changed. The complaints have increased in frequency and intensity, and now are coming more often from especially thoughtful and constructive users of the platform. There is an air of defeat about these complaints now, an almost palpable giving-up. For many of the really smart people on Twitter, it’s over. Not in the sense that they’ll quit using it altogether; but some of what was best about Twitter — primarily the experience of discovery — is now pretty clearly a thing of the past.

“Big Twitter was great — for a while,” says Jacobs. “But now it’s over, and it’s time to move on.”

These trends, if they are actually trends, seem related. I sense that they both stem from a sense of exhaustion with what I’m calling Big Internet. By Big Internet, I mean the platform- and plantation-based internet, the one centered around giants like Google and Facebook and Twitter and Amazon and Apple. Maybe these companies were insurgents at one point, but now they’re fat and bland and obsessed with expanding or defending their empires. They’ve become the Henry VIIIs of the web. And it’s starting to feel a little gross to be in their presence.

So, yeah, I’m down with this retro movement. Bring back personal blogs. Bring back RSS. Bring back the fun. Screw Big Internet.

But, please, don’t bring back the term “blogosphere.”

Image: still from Lost.



Playtators and their fans


“Man’s failure is yet more intense in the face of the triumph of ineffable things than in the face of heavy things.” —Roland Barthes, What Is Sport?

The videogamer has always been at once player and spectator, in the action and yet removed from it. Watcher and watched, entertainer and entertainee, warrior and couch potato, the videogamer was fated to become the broadcaster of his own amusements, and that makes Twitch and its success — Amazon is buying the game-streaming juggernaut for a billion dollars — something of an inevitability.

As Roland Barthes long ago noted, modern spectator sports usually involve an object that acts as a mediator of the competition: a puck or a ball of some sort. The mediator is the main focus of the violence, which helps keep the bloodshed within civilization’s tolerances and hence suitable for the metamedium of the screen. The videogame, which has as its very field of play a screen, adds further layers of mediation to the already unreal world of the spectator sport. What exactly are we watching when we watch Twitch? We’re watching a screen through a screen, virtual reality twice removed. It would seem to be media all the way down: sport as pure symbol, or, in Platonic terms, pure shadow.

It’s not blood, said Godard; it’s red.

Image: still from the 1961 film Of Sport & Men.

