MOOCs and the distance-learning mirage


“I feel like there’s a red pill and a blue pill, and you can take the blue pill and go back to your classroom and lecture your 20 students. But I’ve taken the red pill, and I’ve seen Wonderland.” –Sebastian Thrun, 2012

Now that we’ve begun to talk of MOOCs retrospectively, the time has come to update my previously published survey of the history of hype and wishful thinking that has for more than a century surrounded distance-learning technologies. I am adding a new entry to the list. I suspect it won’t be the last addition.

Mail: Around 1885, Yale professor William Rainey Harper, a pioneer of teaching-by-post, said, “The student who has prepared a certain number of lessons in the correspondence school knows more of the subject treated in those lessons, and knows it better, than the student who has covered the same ground in the classroom.” Soon, he predicted, “the work done by correspondence will be greater in amount than that done in the class-rooms of our academies and colleges.”

Phonograph: In an 1878 article on “practical uses of the phonograph,” the New York Times predicted that the phonograph would be used “in the school-room in training children to read properly without the personal attention of the teacher; in teaching them to spell correctly, and in conveying any lesson to be acquired by study and memory. In short, a school may almost be conducted by machinery.”

Movies: “It is possible to teach every branch of human knowledge with the motion picture,” proclaimed Thomas Edison in 1913. “Our school system will be completely changed in 10 years.”

Radio: In 1927, the University of Iowa declared that “it is no imaginary dream to picture the school of tomorrow as an entirely different institution from that of today, because of the use of radio in teaching.”

TV: “During the 1950s and 1960s,” report education scholars Marvin Van Kekerix and James Andrews, “broadcast television was widely heralded as the technology that would revolutionize education.” In 1963, an official with the National University Extension Association wrote that television provided an “open door” to transfer “vigorous and vital learning” from campuses to homes.

Computers: “There won’t be schools in the future,” wrote MIT’s Seymour Papert in 1984. “I think the computer will blow up the school. That is, the school defined as something where there are classes, teachers running exams, people structured into groups by age, following a curriculum — all of that.”

Web 1.0: The arrival of the web brought the e-learning fad of the late 1990s, as universities and corporations rushed to invest in online courses. In 1999, Cisco CEO John Chambers told the Times’s Thomas Friedman, “The next big killer application for the Internet is going to be education. Education over the Internet is going to be so big, it’s going to make e-mail usage look like a rounding error.”

MOOCs: The New York Times declared 2012 “the year of the MOOC.” “Welcome to the college education revolution,” wrote the ever-hopeful Friedman in a column heralding massive open online courses. “In five years this will be a huge industry.” The MOOC “is transforming higher education,” declared the Economist, “threatening doom for the laggard and mediocre.” Academics were equally bedazzled. “There’s a tsunami coming,” said Stanford president John Hennessy. Opined MIT president Rafael Reif: “I am convinced that digital learning is the most important innovation in education since the printing press.” Harvard’s Clayton Christensen predicted “wholesale bankruptcies” among traditional universities.

All of these media and devices have come to play valuable roles in education and training — which is something worth celebrating — but none of them turned out to be revolutionary or transformative. There may be a deeper lesson here, a lesson about how easy it is to overlook the intangible virtues not just of classrooms but of presence.

Image: detail of John Tenniel illustration for 1865 edition of Alice in Wonderland.


Filed under Uncategorized

Oh no! Robots! Yay!


“This future man, whom the scientists tell us they will produce in no more than a hundred years, seems to be possessed by a rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking), which he wishes to exchange, as it were, for something he has made himself.” –Hannah Arendt, 1958

“Human beings are ashamed to have been born instead of made.” –Günther Anders, 1956

Now that we’ve branded every consumer good with a computer chip “smart,” the inevitable next step is for robots to start thinking big thoughts, turn us into their menials, and mind-meld into a higher form of life, or lifeyness. Or so we’re told by an (oddly enthusiastic) chorus of putatively rational doomsayers. Forget dirty bombs, climate change, and rogue microbes. AI is now the greatest existential threat to humanity.

Pardon me for yawning. The odds of computers becoming thoughtful enough to decide they want to take over the world, hatch a nefarious plan to do so, and then execute said plan remain exquisitely small. Yes, it’s in the realm of the possible. No, it’s not in the realm of the probable. If you want to worry about existential threats, I would suggest that the old-school Biblical ones — flood, famine, pestilence, plague, war — are still the best place to set your sights.

Rob Walker interviewed me about The Glass Cage for Yahoo Tech, and we touched on this topic:

You don’t spend much time on the idea that the march of artificial intelligence is “summoning the demon” that will destroy humanity, as Elon Musk recently worried aloud. And he’s not the only smart person to frame the issue in apocalyptic, sci-fi terms; it’s become an almost trendy fear. What do you make of that?

It’s probably overblown. All those apocalyptic AI fears are based on an assumption that computers will achieve consciousness, or at least some form of self-awareness. But we have yet to see any evidence of that happening, and because we don’t even know how our own minds achieve consciousness, we have no reliable idea of how to go about building self-aware machines.

There seem to be two theories about how computers will attain consciousness. The first is that computers will gain so much speed and so many connections that consciousness will somehow magically “emerge” from their operations. The second is that we’ll be able to replicate the neuronal structure of our own brains in software, creating an artificial mind.

Now, it’s possible that one of those approaches might work, but there’s no rational reason to assume they’ll work. They’re shots in the dark. Even if we’re able to construct a complete software model of a human brain — and that itself is far from a given — we can’t assume that it will actually function the way a brain functions. The mind may be more than a data-processing system, or at least more than one that can be transferred from biological components to manufactured ones.

The people who expect a “singularity” of machine consciousness to happen in the near future — whether it’s Elon Musk or Ray Kurzweil or whoever — are basing their arguments on faith, not reason. I’d argue that the real threat to humanity is our own misguided tendency to put the interests of technology ahead of the interests of people and other living things.

You can read the whole interview here.

Image: still from Andrei Tarkovsky’s Solaris.


A.I. and the new deskilling wave


I have an essay in tomorrow’s Wall Street Journal in which I examine how an overdependence on software is sapping the talents of professionals and argue for a more humanistic approach to programming and automation. The piece begins:

Artificial intelligence has arrived. Today’s computers are discerning and sharp. They can sense the environment, untangle knotty problems, make subtle judgments and learn from experience. They don’t think the way we think—they’re still as mindless as toothpicks—but they can replicate many of our most prized intellectual talents. Dazzled by our brilliant new machines, we’ve been rushing to hand them all sorts of sophisticated jobs that we used to do ourselves.

But our growing reliance on computer automation may be exacting a high price. Worrisome evidence suggests that our own intelligence is withering as we become more dependent on the artificial variety. Rather than lifting us up, smart software seems to be dumbing us down. …

Read on.

Image by aneequs.


When Roombas kill


Jenny Shank interviews me about The Glass Cage over at MediaShift. The conversation gets into some topics that haven’t been covered much elsewhere, including my suggestion that Roomba, the automated vacuum cleaner, provides an early and ever so slightly ominous example of robot morality (or lack thereof). “Roomba makes no distinction between a dust bunny and an insect,” I write in the book. “It gobbles both, indiscriminately. If a cricket crosses its path, the cricket gets sucked to its death. A lot of people, when vacuuming, will also run over the cricket. They place no value on a bug’s life, at least not when the bug is an intruder in their home. But other people will stop what they’re doing, pick up the cricket, carry it to the door, and set it loose. … When we set Roomba loose on a carpet, we cede to it the power to make moral choices on our behalf.”

Here’s the relevant bit from the interview:

Shank: “The Glass Cage” made explicit for me a number of problems with automation that I had been vaguely worried about. But one thing that I had never worried about until reading “The Glass Cage” was the morality of the Roomba. You write, “Roomba makes no distinction between a dust bunny and an insect.” Why is it so easy to overlook the fact, as I did, that when a Roomba vacuums indiscriminately, it’s following a moral code?

Carr: It’s easier not to think about it, frankly. The workings of automated machines often raise tricky moral questions. We tend to ignore those gray areas in order to enjoy the conveniences the machines provide without suffering any guilt. But I don’t think we’re going to be able to remain blind to the moral complexities raised by robots and other autonomous machines much longer. As soon as you allow robots, or software programs, to act freely in the world, they’re going to run up against ethically fraught situations and face hard choices that can’t be resolved through statistical models. That will be true of self-driving cars, self-flying drones, and battlefield robots, just as it’s already true, on a lesser scale, with automated vacuum cleaners and lawnmowers. We’re going to have to figure out how to give machines moral codes even if it’s not something we want to think about.


Image: Juliette Culver.
