Monthly Archives: October 2011

Utopia is creepy

Works of science fiction, particularly good ones, are almost always dystopian. It’s easy to understand why: There’s a lot of drama in Hell, but Heaven is, by definition, conflict-free. Happiness is nice to experience, but seen from the outside it’s pretty dull.

But there’s another reason why portrayals of utopia don’t work. We’ve all experienced the “uncanny valley” that makes it difficult to watch robotic or avatarial replicas of human beings without feeling creeped out. The uncanny valley also exists, I think, when it comes to viewing artistic renderings of a future paradise. Utopia is creepy – or at least it looks creepy. That’s probably because utopia requires its residents to behave like robots, never displaying or even feeling fear or anger or jealousy or bitterness or any of those other messy emotions that plague our fallen world.

I’ve noticed the arrival recently of a new genre of futuristic YouTube videos. They’re created by tech companies for marketing or brand-burnishing purposes. With the flawless production values that only a cash-engorged balance sheet can buy you, they portray a not-too-distant future populated by exceedingly well-groomed people who spend their hyperproductive days going from one screen to the next. (As seems always to be the case with utopias, the atmosphere is very post-sexual.) The productions are intended to present us with visions of technological Edens, but they end up doing the exact opposite: portraying a future world that feels cold, mechanical, and repellent. And the creepiness is only intensified by the similarities between the future they conjure up and the present that we live in.

The latest in this genre comes from Microsoft, and like its predecessors it seems to be the product of a collaboration between Stanley Kubrick and David Lynch. Make sure you watch it with the sound on, because the music in these videos is always richly creepy in itself:

I love the title of this video: Productivity Future Vision (2011). It’s so evocative.

Minds askew

Iain McGilchrist, the psychiatrist and former English professor whose 2009 book on the human brain, The Master and His Emissary, is endlessly fascinating, discusses his ideas on the meaning of the brain’s hemispherical divide in this wonderful animation:

That helps explain, among many other things, why we’re so drawn to the metaphor that portrays the brain as a computer.

Retransmission of a language-based practice

Penn prof Kenneth Goldsmith has seen the future of culture, and it’s a content farm:

For the past several years, I’ve taught a class at the University of Pennsylvania called “Uncreative Writing.” In it, students are penalized for showing any shred of originality and creativity. Instead they are rewarded for plagiarism, identity theft, repurposing papers, patchwriting, sampling, plundering, and stealing. Not surprisingly, they thrive. Suddenly what they’ve surreptitiously become expert at is brought out into the open and explored in a safe environment, reframed in terms of responsibility instead of recklessness.

We retype documents and transcribe audio clips. We make small changes to Wikipedia pages (changing an “a” to “an” or inserting an extra space between words). We hold classes in chat rooms, and entire semesters are spent exclusively in Second Life. Each semester, for their final paper, I have them purchase a term paper from an online paper mill and sign their name to it, surely the most forbidden action in all of academia. Students then must get up and present the paper to the class as if they wrote it themselves, defending it from attacks by the other students. What paper did they choose? Is it possible to defend something you didn’t write? Something, perhaps, you don’t agree with? Convince us.

All this, of course, is technology-driven. When the students arrive in class, they are told that they must have their laptops open and connected. And so we have a glimpse into the future … While the author won’t die, we might begin to view authorship in a more conceptual way: Perhaps the best authors of the future will be ones who can write the best programs with which to manipulate, parse, and distribute language-based practices. Even if, as Christian Bök claims, poetry in the future will be written by machines for other machines to read, there will be, for the foreseeable future, someone behind the curtain inventing those drones, so that even if literature is reducible to mere code — an intriguing idea — the smartest minds behind the machines will be considered our greatest authors.

Bondi Blue

I kid you not, in the late 1990s I actually paid good cash money for a Macintosh computer that looked like this:

[Image: Power Mac G3 All-in-One]

It was as ugly as Steve Ballmer’s ass. It weighed a million pounds. It was referred to as “The Molar.” That was not a term of endearment.

Apple Computer was dead. It was not “nearly dead,” as you’ll hear some say today. It was doornail dead. It was laid out on a slab in a Silicon Valley morgue, a tag hanging from its toe. Scott McNealy could have purchased the remains for an amount more or less equal to what he was spending on greens fees every month, but at the last moment he came to his senses and put his checkbook back into his pocket. The few pathetic fanboys left on the planet – myself among them – knew when we bought a new Mac that what we were really buying was a memento mori.

In 1996, the Apple board exercised the only option it had left, short of outright dissolution: it paid Steve Jobs to come back and take possession of the corpse. But it wasn’t until two years later that Apple released the first product of the second Jobs era. It was a quirky little computer called an iMac. It didn’t look anything like any other PC on the market. It had an outdated all-in-one design; pretty much every other desktop computer on the market had a CPU box and a separate monitor. It was egg-shaped, not square. It lacked a floppy drive. It had some sort of weird new port called USB. Most unusual of all, it was colorful. It was blue, not the ubiquitous beige of the typical PC box. And the color wasn’t just any run-of-the-mill blue. It wasn’t navy blue or sky blue or baby blue. The color of the computer actually had a name. It was called, Jobs let it be known, bondi blue.

[Image: Bondi Blue iMac]

Now, nobody in America knew what the hell a bondi was. We didn’t even know how you were supposed to pronounce it. Was it bond-ee blue, or was it bond-eye blue? But we were soon to learn that the color took its name from Bondi Beach, a long, curving strand just east of Sydney, Australia, popular with sunbathers and surfers. The color of the computer was intended to replicate the color of the sea in that particular part of the world.

Who other than Jobs would have thought, in the middle of the Age of Beige, to fabricate a computer in the color of the ocean off some beach in Australia? It was nuts. But the original iMac was the electroshock that kickstarted Apple’s heart. By no means did it secure the company’s future, and compared to the industry-altering products that were to follow – iPod, iTunes, iPhone, App Store, iPad – it seems like a fairly trivial product today. But it attracted a lot of attention and, even more important, it bought Jobs time. It’s fair to say that, had it not been for the bondi blue iMac, those later products would never have appeared, at least not in their Jobsian form. Apple would have stayed dead, and Jobs would have probably headed off to be a player in Hollywood, maybe even the CEO of Disney.

All the other PC makers back then basically saw their computers as industrial tools. What they cared about – and what most buyers had been told to care about – was the specs of the innards, things like chip speed and hard drive capacity. Jobs sensed that there was in fact a set of computer buyers who might actually want a computer that was the color of the ocean off the coast of Australia – and not only that, but that they might well enjoy forking out a little extra money for the privilege of owning such a computer. A computer, Jobs saw, wasn’t just a tool. It was a fashion accessory. And as the guts of PCs continued to turn into commodities, his instinct was confirmed: it was the outside of the PC – the shape of it, the color of it, the look and feel of it – that came to matter. His insight resurrected Apple and killed the beige box.

Some years after the introduction of that first iMac, I had the opportunity to travel to Sydney, and the fanboy in me demanded that I make a pilgrimage to Bondi Beach. It was a chilly, overcast day, and other than a few joggers and maybe a surfer or two the beach was deserted. I took a picture:

[Image: Bondi Beach]

Look at the color of the water where it hits the beach. That’s bondi blue. It may well have been Steve Jobs’s greatest invention.

Overselling educational software

Tomorrow’s New York Times carries the second installment in the paper’s series “Grading the Digital School.” Like the first installment, this one finds little solid evidence that popular, expensive computer-aided instruction programs actually benefit students. The focus of the new article, written by Trip Gabriel and Matt Richtel, is Cognitive Tutor, a widely esteemed and much coveted software program for teaching math in high schools. The software was developed by Carnegie Learning, a company founded by Carnegie Mellon professors and now owned by Apollo Group, the same company that owns the University of Phoenix.

Carnegie Learning promotes its software as producing “revolutionary results.” It is widely used, and has been applauded by respected thinkers like the Harvard Business School’s Clayton Christensen, who in an article published by the Atlantic two weeks ago used Carnegie Learning as the poster child for the power of software-based education:

Carnegie Learning is the creation of computer and cognitive scientists from Carnegie Mellon University. Their math tutorials draw from cutting-edge research about the way students learn and what motivates them to succeed academically. These scientists have created adaptive computer tutorials that meet students at their individual level of understanding and help them advance via the kinds of exercises they personally find most engaging and effective. The personalization and sophistication is hard for even an expert human tutor to match. It is a powerful, affordable adjunct to classroom instruction, as manifest by Carnegie Learner’s [sic] user base of more than 600,000 secondary students in over 3,000 schools nationwide.

Sounds terrific. But, as the Times story documents, the evidence for Cognitive Tutor’s benefits is weak. In 2010, the U.S. Department of Education analyzed two dozen studies of the software program, and found that it “had no discernible effects” on math test scores for high school students. Another federal study, conducted a year earlier, examined ten leading software programs for teaching math, including Cognitive Tutor, and concluded that they had no “statistically significant effects on test scores.”

Test scores aren’t everything, of course, but it’s fair to say that one of the main reasons cash-strapped schools invest in the computer-aided programs, which can cost three times as much as traditional textbooks, is to boost students’ test scores. And companies like Carnegie Learning use the promise of improved test scores as a prime marketing pitch, sometimes backing it up with cherry-picked case studies or skewed research reports. Certainly, computer-aided instruction programs have a place in schools, but it’s increasingly clear that the benefits of the software have been oversold, and the faith that many educators place in the programs is often unwarranted. As Gabriel and Richtel report: “School officials, confronted with a morass of complicated and sometimes conflicting research, often buy products based on personal impressions, marketing hype or faith in technology for its own sake.”

Back in the early days of the personal computer, the late Steve Jobs, during his original stint at the helm of Apple Computer, played a major role in promoting the use of computers in education. But in a 1996 Wired interview, conducted after he’d left Apple and before he was rehired, he expressed a very different and much more wary view of the role of computer technology in schools:

I used to think that technology could help education. I’ve probably spearheaded giving away more computer equipment to schools than anybody else on the planet. But I’ve had to come to the inevitable conclusion that the problem is not one that technology can hope to solve. What’s wrong with education cannot be fixed with technology. No amount of technology will make a dent … Lincoln did not have a Web site at the log cabin where his parents home-schooled him, and he turned out pretty interesting. Historical precedent shows that we can turn out amazing human beings without technology. Precedent also shows that we can turn out very uninteresting human beings with technology. It’s not as simple as you think when you’re in your 20s – that technology’s going to change the world. In some ways it will, in some ways it won’t.

His view still seems pretty much on the mark.