
Two ways of looking at a MOOC

In a new essay in Times Higher Education, Oxford emeritus professor Alan Ryan places the MOOC movement into a broader historical and pedagogical context. He ends by sketching out the two “mirror images of a future in which MOOCs are the order of the day”:

The dystopian vision that chills the soul of even the least Luddite among us is of undergraduate education dominated by uniform courses, no doubt put together by wonderful teachers but turning everyone beyond the course builders themselves into something like the monitors of the 19th-century Bell-Lancaster schools, checking their students’ work against a schedule determined elsewhere, with little or no scope for their own pedagogical ideas. “Course delivery”, in the awful idiom of the Quality Assurance Agency, will be almost everyone’s lot. The utopian version is the reverse, the anarchist’s vision of an almost magically decentralised education, with no authority determining course content as everyone listens and responds. It is very reminiscent of the two sides of Marx’s vision of the future – uniformity, efficiency and the elimination of effort on the one hand, and the liberation of imagination and frictionless cooperation on the other.

You’ll have to read the essay to find out which version of the future Ryan sees as the more likely.

Shaping ends to fit the means

Another MOOC-related thought (once you start saying “mooc,” it’s difficult to stop):

One of the people I interviewed for the MOOC article was Swarthmore history professor Timothy Burke, who wrote a perceptive post on MOOCs a few months ago on his blog. In our conversation, Burke pointed out that a lot of the massive online classes that have generated so much excitement over the last year are classes that would be largely taught through computers, anyway — computer science, software programming, and web design classes, notably, as well as some math classes. It’s no surprise that such “digitally native” classes would translate well to a purely digital distribution medium, but that fact tells us little about how successfully other sorts of classes would make such a transition.

A few days ago, Wheaton College English professor Alan Jacobs (whom I also interviewed) touched on this same theme in a brief post about my article. “In many ways,” he wrote, “the problem of technology is the ‘to a man with a hammer everything looks like a nail’ problem”:

Once we discover that some subjects — primarily in mathematics and computer science — can be taught via the technologically-sophisticated MOOC method, then it becomes very tempting to say that the most important educational deficiency we have is in mathematics and computer science. This may or may not be true; but if we have what we believe to be a technological solution to a deficiency, then it becomes very easy for us to convince ourselves that the problem we think we can solve is the problem that really matters.

The means shapes the ends, by influencing both the way we define the problem and the way we define the solution. By coincidence, I read Jacobs’s post the same day I was re-reading a chapter from Langdon Winner’s 1977 book Autonomous Technology. Winner describes a phenomenon he calls reverse adaptation, which he defines as “the adjustment of human ends to match the character of the available means”:

We have already seen arguments to the effect that persons adapt themselves to the order, discipline, and pace of the organizations in which they work. But even more significant is the state of affairs in which people come to accept the norms and standards of technical processes as central to their lives as a whole. A subtle but comprehensive alteration takes place in the form and substance of their thinking and motivation. Efficiency, speed, precise measurement, rationality, productivity, and technical improvement become ends in themselves applied obsessively to areas of life in which they would previously have been rejected as inappropriate. Efficiency — the quest for maximum output per unit input — is, no one would question, of paramount importance in technical systems. But now efficiency takes on a more general value and becomes a universal maxim for all intelligent conduct. Is the most product being obtained for the resources and effort expended? The question is no longer applied solely to such things as assembly-line production. It becomes equally applicable to matters of pleasure, leisure, learning, every instance of human communication, and every kind of activity, whatever its ostensive purpose. Similarly, speed — the rate of performance and swiftness of motion — makes sense as an instrumental value in certain kinds of technological operation. But now speed is taken to be an admirable characteristic in and of itself. The faster is the superior, whatever it may be.

Not only does everything look like a nail to the man with a hammer, but the solution to every problem looks like a nail gun.

The implications go well beyond MOOCs, of course.

The prehistory of the MOOC

In writing my piece on MOOCs for Technology Review, I had the opportunity to do a little digging into the history of distance learning, or, as it was once known, university extension. It’s an illuminating tale, not least for the way it highlights the hopes we invest in new media technologies. Pretty much every new communication system seems to have inspired visions of revolutions in education:

Mail: Around 1885, Yale professor William Rainey Harper, a pioneer of teaching-by-post, said, “The student who has prepared a certain number of lessons in the correspondence school knows more of the subject treated in those lessons, and knows it better, than the student who has covered the same ground in the classroom.” Soon, he predicted, “the work done by correspondence will be greater in amount than that done in the class-rooms of our academies and colleges.”

Phonograph: In an 1878 article on “practical uses of the phonograph,” the New York Times predicted that the phonograph would be used “in the school-room in training children to read properly without the personal attention of the teacher; in teaching them to spell correctly, and in conveying any lesson to be acquired by study and memory. In short, a school may almost be conducted by machinery.”

Movies: “It is possible to teach every branch of human knowledge with the motion picture,” proclaimed Thomas Edison in 1913. “Our school system will be completely changed in 10 years.”

Radio: In 1927, the University of Iowa declared that “it is no imaginary dream to picture the school of tomorrow as an entirely different institution from that of today, because of the use of radio in teaching.”

TV: “During the 1950s and 1960s,” report education scholars Marvin Van Kekerix and James Andrews, “broadcast television was widely heralded as the technology that would revolutionize education.” In 1963, an official with the National University Extension Association wrote that television provided an “open door” to transfer “vigorous and vital learning” from campuses to homes.

Computers: “There won’t be schools in the future,” wrote MIT’s Seymour Papert in 1984. “I think the computer will blow up the school. That is, the school defined as something where there are classes, teachers running exams, people structured into groups by age, following a curriculum — all of that.”

Web 1.0: Internet-based education has itself gone through one boom-to-bust cycle already, with the e-learning fad of the late 1990s. In 1999, Cisco CEO John Chambers told the Times’s Thomas Friedman, “The next big killer application for the Internet is going to be education. Education over the Internet is going to be so big, it’s going to make e-mail usage look like a rounding error.”

Home study programs, whether delivered through mailboxes or TVs, CD-ROMs or websites, have long played a very important role in expanding access to education and training. They’ve provided millions of people with valuable skills and perspectives that would otherwise have remained out of reach. But, despite more than a century of hope and hype, the technologies of distance learning have had surprisingly little effect on traditional schooling. Colleges, in particular, still look and work pretty much as they always have. Maybe that’s because the right technology hasn’t come along yet. Or maybe it’s because traditional classroom schooling, for all its flaws and inefficiencies, has strengths that we either don’t grasp or are quick to overlook. If it’s the former, then investing money and hopes in new technologies to revolutionize education makes sense. If it’s the latter, it would probably be wiser to identify and wrestle with the particular, and complicated, problems that beset traditional education rather than seeking magic bullets.

For those interested in the history of distance education, here are a couple of useful sources:

The Foundations of American Distance Education: A Century of Collegiate Correspondence Study, a 1991 compendium of articles edited by Barbara L. Watkins and Steven J. Wright.

Digital Diploma Mills: The Automation of Higher Education, by the late David F. Noble, published in 2003. Noble has an extensive chapter on the history of correspondence study, which builds on this 1999 essay.

MOOCs: what’s real, what’s hype

I have an article in the new issue of Technology Review, The Crisis in Higher Education, that looks at the phenomenon of massive open online courses, or MOOCs. The free delivery of college classes over the net is stirring a huge amount of excitement this fall, and the organizations pioneering the MOOC model — notably, Coursera, Udacity, and edX — are grabbing a lot of attention and investment. But outsized expectations about revolutions in education have accompanied virtually every major new communication medium in the past, including the postal system, motion pictures, radio, TV, and personal computers. (In 1927, the University of Iowa declared that “it is no imaginary dream to picture the school of tomorrow as an entirely different institution from that of today, because of the use of radio in teaching.”) So is today different? Will the combination of breakthroughs in cloud computing, data mining, machine learning, and social networking at last enable distance learning to achieve its grand promise? That’s the question I wrestle with in the article.

Here’s how the piece begins:

A hundred years ago, higher education seemed on the verge of a technological revolution. The spread of a powerful new communication network — the modern postal system — had made it possible for universities to distribute their lessons beyond the bounds of their campuses. Anyone with a mailbox could enroll in a class. Frederick Jackson Turner, the famed University of Wisconsin historian, wrote of how the “machinery” of distance learning would carry “irrigating streams of education into the arid regions” of the country. Sensing a historic opportunity to reach new students and garner new revenues, schools rushed to set up correspondence divisions. By the 1920s, postal courses had become a full-blown mania. Four times as many people were taking them as were enrolled in all the nation’s colleges and universities combined.

The hopes for this early form of distance learning went well beyond the broadening of access. Many educators believed that correspondence courses, by allowing assignments and assessments to be tailored to each student, would actually be better than traditional on-campus instruction. The University of Chicago’s Home-Study Department, one of the nation’s largest, told prospective enrollees that they would “receive individual personal attention,” delivered “according to any personal schedule and in any place where postal service is available.” The department’s director claimed that correspondence study offered students an intimate “tutorial relationship” that “takes into account individual differences in learning.” The education, he said, would prove superior to that delivered in “the crowded classroom of the ordinary American University.”

We’re hearing strikingly similar claims today. Another powerful communication network—the Internet—is again raising hopes of a revolution in higher education. This fall, many of the country’s leading universities, including MIT, Harvard, Stanford, and Princeton, are offering classes for free over the Net, and more than a million people around the world have signed up to take them. These massive open online courses, or MOOCs, are earning praise for bringing outstanding college teaching to multitudes of students who otherwise wouldn’t have access to it, including those in remote places and those in the middle of their careers. The online classes are also being promoted as a way to bolster the quality and productivity of teaching in general — for students on campus as well as off. Former U.S. Secretary of Education William Bennett has written that he senses “an Athens-like renaissance” in the making. Stanford President John Hennessy told the New Yorker he sees “a tsunami coming.” …

Read on.

(Also check out The Library of Utopia, a somewhat related article I wrote for Tech Review earlier this year.)

Google Glass and Claude Glass

A hirsute Sergey Brin made waves earlier this month when he catwalked through New York Fashion Week with a Google Glass wrapped around his bean. It was something of a coming out party for Google’s reality-augmentation device, which promises to democratize the heads-up display, giving us all a fighter pilot’s view of the world. Diane von Furstenberg got Glassed. So did Sarah Jessica Parker. Wendi Murdoch seemed impressed by the cyborgian accessory, as did her husband, Rupert, who promptly tweeted, “Genius!” Google Glass is shaping up to be the biggest thing to hit the human brow since Olivia Newton-John’s headband. Let’s get post-physical.


[Augmented Reality device, circa 2012]

It’s appropriate that models, designers, and other fashionistas would be among the first to embrace Glass. The fashion industry has always been at the forefront of reality augmentation. But Google’s eye extension is not the first Glassware to come into vogue. In the 18th and 19th centuries, the gadget of choice for trendsetters was the Claude Glass. Named after the popular French landscape painter Claude Lorrain, a Claude Glass was a tinted, convex mirror that ladies and gentlemen would carry around on outings and whip out whenever they wanted to amp up the beauty of a natural scene. As Leo Marx explained in his classic The Machine in the Garden, “When a viewer used the Claude Glass the landscape was transformed into a provisional work of art, framed and suffused by a golden tone like that of the master’s paintings.” The glass “helped create a pastoral illusion.” According to a University of Windsor site dedicated to this early mobile technology, a Claude Glass “essentially edited a natural scene, making its scale and diversity manageable, throwing its picturesque qualities into relief and — crucially — making it much easier to draw and record.”

[Augmented Reality device, circa 1780]

Where a Claude Glass suffused the landscape with a soft painterly light, a Google Glass saturates the landscape with hard data. It gives its owner the eyes not of an artist but of an analyst. Instead of a pastoral illusion, you get a digital illusion. But if the perspectives the two gadgets provide are radically different, the Claude Glass and the Google Glass share some important qualities. Both tell us that our senses are insufficient, that manufactured vision is superior to what our own meager eyes can show us. And both turn the world into a packaged good — a product to be consumed. The Google Glass is superior to the Claude Glass in this regard. Not only does it package an enhanced version of reality, but it annotates it with a profusion of descriptive text and other explanatory symbols — and then, with its camera and its uplinks to social networks, it allows us to share the product. With a Google Glass on our forehead, we’re not just a consumer of augmented reality; we’re a value-added reseller.

Like the Claude Glass before it, the Google Glass reverses the biblical prophecy: “For now we see through a glass, darkly; but then face to face.” Technology reveals the paradise beyond the real — or at least it reveals how we imagine that paradise to be.

[Images: Google Glass: Google promotional photo; Claude Glass: detail from Sophia Delaval, Mrs Jardis, attributed to Edward Alcock, from National Trust]

When links turn inward

I recently argued that the proliferation of self-links (like that one you just passed) on prominent sites and blogs is one of the factors that has made hyperlinks less useful in determining search results and, in general, in sorting, filtering, and evaluating the content of the web. A link is most valuable, in this regard, when it is an expression of an individual’s informed judgment — unbiased by self-interest — about the value and importance of content on the web. As many have pointed out, when links are used in this way they serve as “votes” about the quality or importance of web pages. But as the intent of links has shifted, particularly on popular commercial sites, away from expressing personal judgment and toward pursuing personal or institutional interest (boosting page views, ad impressions, or search rankings, for instance, or even simply using your own or your colleagues’ past work to provide context to readers), the utility of links as democratic votes on the value of web content has been reduced. That doesn’t mean that all self-links, or “internal links,” as they’re often called, are bad — they’re frequently appropriate and helpful — but when they are used routinely, as they tend to be today, they make links in general less valuable as a means of filtering and navigating the web.

A new paper by Mark Coddington, a journalism grad student at the University of Texas, shows just how dominant a practice internal linking is among top journalism sites. (I found the paper via a Poynter story, which I found via an Andrew Sullivan link.) Coddington examined the links in a sample of stories from three types of journalism sites: news sites run by big media companies, blogs written by journalists at big media companies (“j-blogs”), and popular independent news blogs. He found that 91% of the links in the stories on the news sites were internal links (pointing to other pages on the same site), and that 54% of the links in the j-blog stories were internally directed. In contrast, only 18% of the independent bloggers’ links were internal links. I’ve termed the practice of inward linking “nepotistic linking”; Sullivan uses the more memorable term “linksturbation.”
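Operationally, an “internal link” in a study like this is simply a link whose host matches the publishing site’s own domain. Here is a minimal sketch of that classification — the function name and the www-normalization rule are my own illustration, not Coddington’s actual methodology, which would also have to handle subdomains, redirects, and relative URLs:

```python
from urllib.parse import urlparse

def classify_links(page_domain, link_urls):
    """Count links as 'internal' (same domain as the page) or 'external'.

    A rough proxy for the internal/external split described above;
    subdomains, redirects, and relative URLs are ignored here.
    """
    counts = {"internal": 0, "external": 0}
    target = page_domain.lower().removeprefix("www.")
    for url in link_urls:
        host = urlparse(url).netloc.lower()
        # Treat bare and www-prefixed hosts as the same site.
        if host.removeprefix("www.") == target:
            counts["internal"] += 1
        else:
            counts["external"] += 1
    return counts
```

Run over a sample of stories, tallies like these are what yield the 91% / 54% / 18% internal-link figures reported in the study. (Note that `str.removeprefix` requires Python 3.9 or later.)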

Coddington also measured the percentage of links in the stories that pointed to “mainstream media” sites. On the news sites, fully 93% of links go to mainstream media sites. For j-blogs, the figure is 77%, and for indies, the figure is 33%. As Coddington observes, the overwhelming tendency of mainstream media sites to link to mainstream media sites tends to concentrate authority on the web. Whereas links once served to broaden the conversation, they increasingly serve to narrow it. Coddington, who supplemented his analysis with interviews with journalists, notes that

those within news organizations overwhelmingly expressed philosophies of openness regarding the sources of their links. They placed very few restrictions on what types of sources they would link to, and they were emphatic about their willingness to link both outside of their news organizations, and outside of traditional media sources. As we have seen and will examine further, however, these linking philosophies have yet to be borne out in the actual linking practices of mainstream news organizations, particularly outside of their blogging content.

The sites are open in theory, but largely closed in practice. Blogs once provided a counterforce to such homogenization, but as personal blogs have been displaced by commercial ones (in terms of traffic) over the last seven years, the blogosphere has come to amplify the insularity effect. As Coddington’s study suggests, institutional bloggers are far more likely to link inwardly and to link to mainstream sites than are independent bloggers.

Although Coddington notes that “links can wield immense power to define the parameters of an online text and the Web itself,” the focus of his research is on journalism, in particular how links serve to shape news reporting and hence “frame” public perceptions. He sums up his findings this way:

Inside news organizations, a link is predominantly a tool for providing context, a largely internally directed reference for curious readers hoping to delve deeper into an issue in the news. It points primarily to undated sources and general pages, reaching outside of the day-to-day developments of a news story toward a general, static body of knowledge from which to draw a fuller sense of the environment in which the story is occurring. […] But the logic of this linking practice also circumscribes the frame of the news story, just as it contextualizes it. The body of knowledge to which a news organization’s links point is, by and large, accumulated by that news organization itself and others like it. This is consistent with previous findings that news organizations primarily link internally, and this practice also locates the nexus of online authority largely within the same institutions that constitute it offline.

Links carry not only meaning and context but also ideology. Where links were once celebrated for their ability to undermine old, centralized power structures and information sources, the way they are used today increasingly seems to be reinforcing those structures and sources. As they become more exclusive and less inclusive, links themselves turn into a mainstream medium.