Two ways of looking at a MOOC

In a new essay in Times Higher Education, Oxford emeritus professor Alan Ryan places the MOOC movement into a broader historical and pedagogical context. He ends by sketching out the two “mirror images of a future in which MOOCs are the order of the day”:

The dystopian vision that chills the soul of even the least Luddite among us is of undergraduate education dominated by uniform courses, no doubt put together by wonderful teachers but turning everyone beyond the course builders themselves into something like the monitors of the 19th-century Bell-Lancaster schools, checking their students’ work against a schedule determined elsewhere, with little or no scope for their own pedagogical ideas. “Course delivery”, in the awful idiom of the Quality Assurance Agency, will be almost everyone’s lot. The utopian version is the reverse, the anarchist’s vision of an almost magically decentralised education, with no authority determining course content as everyone listens and responds. It is very reminiscent of the two sides of Marx’s vision of the future – uniformity, efficiency and the elimination of effort on the one hand, and the liberation of imagination and frictionless cooperation on the other.

You’ll have to read the essay to find out which version of the future Ryan sees as the more likely.

Shaping ends to fit the means

Another MOOC-related thought (once you start saying “mooc,” it’s difficult to stop):

One of the people I interviewed for the MOOC article was Swarthmore history professor Timothy Burke, who wrote a perceptive post on MOOCs a few months ago on his blog. In our conversation, Burke pointed out that a lot of the massive online classes that have generated so much excitement over the last year are classes that would be largely taught through computers, anyway — computer science, software programming, and web design classes, notably, as well as some math classes. It’s no surprise that such “digitally native” classes would translate well to a purely digital distribution medium, but that fact tells us little about how successfully other sorts of classes would make such a transition.

A few days ago, Wheaton College English professor Alan Jacobs (whom I also interviewed) touched on this same theme in a brief post about my article. “In many ways,” he wrote, “the problem of technology is the ‘to a man with a hammer everything looks like a nail’ problem”:

Once we discover that some subjects — primarily in mathematics and computer science — can be taught via the technologically-sophisticated MOOC method, then it becomes very tempting to say that the most important educational deficiency we have is in mathematics and computer science. This may or may not be true; but if we have what we believe to be a technological solution to a deficiency, then it becomes very easy for us to convince ourselves that the problem we think we can solve is the problem that really matters.

The means shapes the ends, by influencing both the way we define the problem and the way we define the solution. By coincidence, I read Jacobs’s post the same day I was re-reading a chapter from Langdon Winner’s 1977 book Autonomous Technology. Winner describes a phenomenon he calls reverse adaptation, which he defines as “the adjustment of human ends to match the character of the available means”:

We have already seen arguments to the effect that persons adapt themselves to the order, discipline, and pace of the organizations in which they work. But even more significant is the state of affairs in which people come to accept the norms and standards of technical processes as central to their lives as a whole. A subtle but comprehensive alteration takes place in the form and substance of their thinking and motivation. Efficiency, speed, precise measurement, rationality, productivity, and technical improvement become ends in themselves applied obsessively to areas of life in which they would previously have been rejected as inappropriate. Efficiency — the quest for maximum output per unit input — is, no one would question, of paramount importance in technical systems. But now efficiency takes on a more general value and becomes a universal maxim for all intelligent conduct. Is the most product being obtained for the resources and effort expended? The question is no longer applied solely to such things as assembly-line production. It becomes equally applicable to matters of pleasure, leisure, learning, every instance of human communication, and every kind of activity, whatever its ostensive purpose. Similarly, speed — the rate of performance and swiftness of motion — makes sense as an instrumental value in certain kinds of technological operation. But now speed is taken to be an admirable characteristic in and of itself. The faster is the superior, whatever it may be.

Not only does everything look like a nail to the man with a hammer, but the solution to every problem looks like a nail gun.

The implications go well beyond MOOCs, of course.

The prehistory of the MOOC

In writing my piece on MOOCs for Technology Review, I had the opportunity to do a little digging into the history of distance learning, or, as it was once known, university extension. It’s an illuminating tale, not least for the way it highlights the hopes we invest in new media technologies. Pretty much every new communication system seems to have inspired visions of revolutions in education:

Mail: Around 1885, Yale professor William Rainey Harper, a pioneer of teaching-by-post, said, “The student who has prepared a certain number of lessons in the correspondence school knows more of the subject treated in those lessons, and knows it better, than the student who has covered the same ground in the classroom.” Soon, he predicted, “the work done by correspondence will be greater in amount than that done in the class-rooms of our academies and colleges.”

Phonograph: In an 1878 article on “practical uses of the phonograph,” the New York Times predicted that the phonograph would be used “in the school-room in training children to read properly without the personal attention of the teacher; in teaching them to spell correctly, and in conveying any lesson to be acquired by study and memory. In short, a school may almost be conducted by machinery.”

Movies: “It is possible to teach every branch of human knowledge with the motion picture,” proclaimed Thomas Edison in 1913. “Our school system will be completely changed in 10 years.”

Radio: In 1927, the University of Iowa declared that “it is no imaginary dream to picture the school of tomorrow as an entirely different institution from that of today, because of the use of radio in teaching.”

TV: “During the 1950s and 1960s,” report education scholars Marvin Van Kekerix and James Andrews, “broadcast television was widely heralded as the technology that would revolutionize education.” In 1963, an official with the National University Extension Association wrote that television provided an “open door” to transfer “vigorous and vital learning” from campuses to homes.

Computers: “There won’t be schools in the future,” wrote MIT’s Seymour Papert in 1984. “I think the computer will blow up the school. That is, the school defined as something where there are classes, teachers running exams, people structured into groups by age, following a curriculum — all of that.”

Web 1.0: Internet-based education has itself gone through one boom-to-bust cycle already, with the e-learning fad of the late 1990s. In 1999, Cisco CEO John Chambers told the Times’s Thomas Friedman, “The next big killer application for the Internet is going to be education. Education over the Internet is going to be so big, it’s going to make e-mail usage look like a rounding error.”

Home study programs, whether delivered through mailboxes or TVs, CD-ROMs or websites, have long played a very important role in expanding access to education and training. They’ve provided millions of people with valuable skills and perspectives that would otherwise have remained out of reach. But, despite more than a century of hope and hype, the technologies of distance learning have had surprisingly little effect on traditional schooling. Colleges, in particular, still look and work pretty much as they always have. Maybe that’s because the right technology hasn’t come along yet. Or maybe it’s because traditional classroom schooling, for all its flaws and inefficiencies, has strengths that we either don’t grasp or are quick to overlook. If it’s the former, then investing money and hopes in new technologies to revolutionize education makes sense. If it’s the latter, it would probably be wiser to identify and wrestle with the particular, and complicated, problems that beset traditional education rather than seeking magic bullets.

For those interested in the history of distance education, here are a couple of useful sources:

The Foundations of American Distance Education: A Century of Collegiate Correspondence Study, a 1991 compendium of articles edited by Barbara L. Watkins and Steven J. Wright.

Digital Diploma Mills: The Automation of Higher Education, by the late David F. Noble, published in 2003. Noble has an extensive chapter on the history of correspondence study, which builds on this 1999 essay.

MOOCs: what’s real, what’s hype

I have an article in the new issue of Technology Review, The Crisis in Higher Education, that looks at the phenomenon of massive open online courses, or MOOCs. The free delivery of college classes over the net is stirring a huge amount of excitement this fall, and the organizations pioneering the MOOC model — notably, Coursera, Udacity, and edX — are grabbing a lot of attention and investment. But outsized expectations about revolutions in education have accompanied virtually every major new communication medium in the past, including the postal system, motion pictures, radio, TV, and personal computers. (In 1927, the University of Iowa declared that “it is no imaginary dream to picture the school of tomorrow as an entirely different institution from that of today, because of the use of radio in teaching.”) So is today different? Will the combination of breakthroughs in cloud computing, data mining, machine learning, and social networking at last enable distance learning to achieve its grand promise? That’s the question I wrestle with in the article.

Here’s how the piece begins:

A hundred years ago, higher education seemed on the verge of a technological revolution. The spread of a powerful new communication network — the modern postal system — had made it possible for universities to distribute their lessons beyond the bounds of their campuses. Anyone with a mailbox could enroll in a class. Frederick Jackson Turner, the famed University of Wisconsin historian, wrote of how the “machinery” of distance learning would carry “irrigating streams of education into the arid regions” of the country. Sensing a historic opportunity to reach new students and garner new revenues, schools rushed to set up correspondence divisions. By the 1920s, postal courses had become a full-blown mania. Four times as many people were taking them as were enrolled in all the nation’s colleges and universities combined.

The hopes for this early form of distance learning went well beyond the broadening of access. Many educators believed that correspondence courses, by allowing assignments and assessments to be tailored to each student, would actually be better than traditional on-campus instruction. The University of Chicago’s Home-Study Department, one of the nation’s largest, told prospective enrollees that they would “receive individual personal attention,” delivered “according to any personal schedule and in any place where postal service is available.” The department’s director claimed that correspondence study offered students an intimate “tutorial relationship” that “takes into account individual differences in learning.” The education, he said, would prove superior to that delivered in “the crowded classroom of the ordinary American University.”

We’re hearing strikingly similar claims today. Another powerful communication network—the Internet—is again raising hopes of a revolution in higher education. This fall, many of the country’s leading universities, including MIT, Harvard, Stanford, and Princeton, are offering classes for free over the Net, and more than a million people around the world have signed up to take them. These massive open online courses, or MOOCs, are earning praise for bringing outstanding college teaching to multitudes of students who otherwise wouldn’t have access to it, including those in remote places and those in the middle of their careers. The online classes are also being promoted as a way to bolster the quality and productivity of teaching in general — for students on campus as well as off. Former U.S. Secretary of Education William Bennett has written that he senses “an Athens-like renaissance” in the making. Stanford President John Hennessy told the New Yorker he sees “a tsunami coming.” …

Read on.

(Also check out The Library of Utopia, a somewhat related article I wrote for Tech Review earlier this year.)

Google Glass and Claude Glass

A hirsute Sergey Brin made waves earlier this month when he catwalked through New York Fashion Week with a Google Glass wrapped around his bean. It was something of a coming out party for Google’s reality-augmentation device, which promises to democratize the heads-up display, giving us all a fighter pilot’s view of the world. Diane von Furstenberg got Glassed. So did Sarah Jessica Parker. Wendi Murdoch seemed impressed by the cyborgian accessory, as did her husband, Rupert, who promptly tweeted, “Genius!” Google Glass is shaping up to be the biggest thing to hit the human brow since Olivia Newton-John’s headband. Let’s get post-physical.

[Augmented Reality device, circa 2012]

It’s appropriate that models, designers, and other fashionistas would be among the first to embrace Glass. The fashion industry has always been at the forefront of reality augmentation. But Google’s eye extension is not the first Glassware to come into vogue. In the 18th and 19th centuries, the gadget of choice for trendsetters was the Claude Glass. Named after the popular French landscape painter Claude Lorrain, a Claude Glass was a tinted, convex mirror that ladies and gentlemen would carry around on outings and whip out whenever they wanted to amp up the beauty of a natural scene. As Leo Marx explained in his classic The Machine in the Garden, “When a viewer used the Claude Glass the landscape was transformed into a provisional work of art, framed and suffused by a golden tone like that of the master’s paintings.” The glass “helped create a pastoral illusion.” According to a University of Windsor site dedicated to this early mobile technology, a Claude Glass “essentially edited a natural scene, making its scale and diversity manageable, throwing its picturesque qualities into relief and — crucially — making it much easier to draw and record.”

[Augmented Reality device, circa 1780]

Where a Claude Glass suffused the landscape with a soft painterly light, a Google Glass saturates the landscape with hard data. It gives its owner the eyes not of an artist but of an analyst. Instead of a pastoral illusion, you get a digital illusion. But if the perspectives the two gadgets provide are radically different, the Claude Glass and the Google Glass share some important qualities. Both tell us that our senses are insufficient, that manufactured vision is superior to what our own meager eyes can show us. And both turn the world into a packaged good — a product to be consumed. The Google Glass is superior to the Claude Glass in this regard. Not only does it package an enhanced version of reality, but it annotates it with a profusion of descriptive text and other explanatory symbols — and then, with its camera and its uplinks to social networks, it allows us to share the product. With a Google Glass on our forehead, we’re not just a consumer of augmented reality; we’re a value-added reseller.

Like the Claude Glass before it, the Google Glass reverses the biblical prophecy: “For now we see through a glass, darkly; but then face to face.” Technology reveals the paradise beyond the real — or at least it reveals how we imagine that paradise to be.

[Images: Google Glass: Google promotional photo; Claude Glass: detail from Sophia Delaval, Mrs Jardis, attributed to Edward Alcock, from National Trust]

The posthastism post

Realtime makes history of primetime. Hours before the competition unfolds on TV, I already know that Gabby takes the all-around gold, that Viktoria cries in despair, and that Aly loses the bronze to Aliya on a technicality. The report arrives before the event, mediawise. Prediction: The next Summer Olympics will be broadcast on the History Channel.

Language, like time, warps back on itself, and so we have a new movement — or is it an antimovement? — called posthastism, which in prerealtime, when we had time to think, would probably have been called postposthastism. From an interview with Hans Ulrich Obrist:

In his talk at Tate Modern last week, Tino Sehgal talked a lot about slowness, and how it was a key aspect of the way he engages with the world in his work. As someone known for your hyper-productivity, how do you relate to this idea of slowness?

I’m interested in resisting the homogenization of time: so it’s a matter of making it faster and slower. For art, slowness has always been very important. The experience of seeing art slows us down. Actually, we have just founded a movement with Shumon Basar and Joseph Grima last week called posthastism, where we go beyond haste. Joseph Grima was in Malta, and he had this sudden feeling of posthaste. Shumon and I picked up on it and we had a trialogue, which went on for a week on Blackberry messenger. Posthastism. [Reading from a sheet of paper hastily brought in by his research assistant] As Joseph said: “Periphery is the new epicenter,” “post-Fordism is still hastism because it’s immaterial hastism, which could lead [to] now’s posthastism.” One more thing to quote is “delays are revolutions,” which was a good exhibition title.

Was Joseph Grima really feeling posthaste in Malta or was he experiencing its opposite? “Posthaste” comes from an instruction written on letters a few centuries ago: “Haste, Post, Haste.” Which meant: Get it there quicker than quick. Run, mailman, run! We’ve always yearned for realtime, even when messages moved at footspeed. But now we really have it. #hastetwitterhaste seems unnecessary — an immateriality in an age of immaterial hastism.

Obrist is right, though: realtime is homogenized time and hence needs to be resisted. So sign me up for posthastism, posthaste. “Delays are revolutions”: that’s a slogan I can march under. My manifesto:

— Never respond to a text until at least 24 hours have passed.

— Wait four days or more before replying to an email.

— Tweet about things that happened a month ago.

— Stop your Facebook Timeline at the turn of the last century.

— Watch the Olympics on NBC after dinner.

The revolution, it turns out, will be televised. On tape delay. Viva primetime!

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.