I’m pleased to report that Booklist, the magazine published by the American Library Association, has named The Big Switch one of the top ten business books of 2008.
Googling and intelligence
Earlier this week, UCLA’s Memory and Aging Research Center released a summary of the results of a study of the effects of Internet searching on brain activity, timed to coincide with the release of a new book, iBrain, by the center’s director, Gary Small. In the study, Small and his team used functional magnetic resonance imaging (fMRI) to track the blood flows in the brains of 24 middle-aged and elderly volunteers as they either searched the Web or read books. When the test subjects read books, they displayed, as would be expected, significant brain activity in “the regions controlling language, reading, memory and visual abilities.” When the subjects searched the Web, those who already had experience using the Net also displayed considerable activity in the brain regions that “control decision-making and complex reasoning.” (Those without Net experience displayed much less activity in those regions.)
In a great example of the kind of knee-jerk mental response that often characterizes high-speed media, a number of blogs and other media outlets seized on the study as evidence that the Net is “making us smarter.” The findings were portrayed as a counterweight to my recent article in the Atlantic, “Is Google Making Us Stupid?,” which argued that the Internet may be eroding our capacity for deep and concentrated thought. Wired’s Epicenter blog, for instance, brayed, “All that talk about how Google is making us stupid is a bit of a crock, according to a new study from UCLA researchers.” The Epicenter headline: “Google Makes You Smart.”
Not quite.
I’m thrilled, first of all, that brain researchers are beginning to explore the cognitive consequences of Internet use, and I look forward to reading Small’s full report on his study when it is published in the American Journal of Geriatric Psychiatry. This study, and the many others that are sure to follow, will begin to give us a picture of what happens when our brains adapt to the Web and its distinctive style of transmitting and displaying information. But this picture will necessarily develop slowly and fuzzily. fMRI scans have been a godsend to brain researchers, but the evidence they present is often imprecise. Blood flows in the brain tell us much about what the brain is doing but very little about the quality of thought that results. And when we’re talking about intelligence, it’s the quality of thought that matters.
It’s good to know that older people can, apparently, get some brain exercise through googling – and that that may help them maintain their mental acuity. But to leap from observing that many areas of the brain are activated when searching the Net to the contention that searching the Net makes us more intelligent is like saying that doing pushups improves our carpentry skills. I would guess that you’d see similarly broad brain activity patterns in, say, people playing Pac-Man. Does that mean that Pac-Man makes us more intelligent? No, it just means that playing Pac-Man involves many brain circuits.
The Freakonomics blog had a good take on the study:
Small’s team found that experienced web users experience increased stimulation in the regions of their brains that handle complex reasoning and decision making. The activity was more widespread than when the same subjects were reading a book, or when inexperienced web users surfed the internet. In other words, being able to tease out useful information from all the chaff on the internet can be as intellectually demanding a task as completing a crossword puzzle. But is puzzle solving the same kind of “smartness” as the “smartness” that comes from reading a book?
Indeed, I wonder whether the fact that more brain regions are in simultaneous use during web use than during reading doesn’t illustrate (among other things) that concentrated thought becomes more difficult to maintain when reading online than when reading a printed work. Is the relative breadth of brain activity discovered by Small and his colleagues also a map of distraction?
Gary Small wrote a letter to the Atlantic in response to my article. “Nicholas Carr correctly notes that technology is changing our lives and our brains,” he said, continuing:
The average young person spends more than eight hours each day using technology (computers, PDAs, TV, videos), and much less time engaging in direct social contact. Our UCLA brain-scanning studies are showing that such repeated exposure to technology alters brain circuitry, and young developing brains (which usually have the greatest exposure) are the most vulnerable … More than 300,000 years ago, our Neanderthal ancestors discovered handheld tools, which led to the co-evolution of language, goal-directed behavior, social networking, and accelerated development of the frontal lobe, which controls these functions. Today, video-game brain, Internet addiction, and other technology side effects appear to be suppressing frontal-lobe executive skills and our ability to communicate face-to-face. Instead, our brains are developing circuitry for online social networking and are adapting to a new multitasking technology culture.
What Small’s work shows us, above all else, is that Internet use does alter the functioning of our brains, changing how we think and even who we are. We are googling our way, compulsively, to a new mind.
Surface tensions
In the new issue of the Atlantic, veteran blogger Andrew Sullivan writes a thoughtful and generous paean to blogging, which he calls – and he means it more as compliment than as criticism – “a superficial medium”:
By superficial, I mean simply that blogging rewards brevity and immediacy. No one wants to read a 9,000-word treatise online. On the Web, one-sentence links are as legitimate as thousand-word diatribes—in fact, they are often valued more. And, as Matt Drudge told me when I sought advice from the master in 2001, the key to understanding a blog is to realize that it’s a broadcast, not a publication. If it stops moving, it dies. If it stops paddling, it sinks.
But the superficiality masked considerable depth—greater depth, from one perspective, than the traditional media could offer. The reason was a single technological innovation: the hyperlink. An old-school columnist can write 800 brilliant words analyzing or commenting on, say, a new think-tank report or scientific survey. But in reading it on paper, you have to take the columnist’s presentation of the material on faith, or be convinced by a brief quotation (which can always be misleading out of context). Online, a hyperlink to the original source transforms the experience. Yes, a few sentences of bloggy spin may not be as satisfying as a full column, but the ability to read the primary material instantly—in as careful or shallow a fashion as you choose—can add much greater context than anything on paper …
A blog, therefore, bobs on the surface of the ocean but has its anchorage in waters deeper than those print media is technologically able to exploit. It disempowers the writer to that extent, of course. The blogger can get away with less and afford fewer pretensions of authority. He is—more than any writer of the past—a node among other nodes, connected but unfinished without the links and the comments and the track-backs that make the blogosphere, at its best, a conversation, rather than a production.
He goes on to reflect on the downside of blogging’s essential superficiality: its “failure to provide stable truth or a permanent perspective”:
A traditional writer is valued by readers precisely because they trust him to have thought long and hard about a subject, given it time to evolve in his head, and composed a piece of writing that is worth their time to read at length and to ponder. Bloggers don’t do this and cannot do this—and that limits them far more than it does traditional long-form writing.
A blogger will air a variety of thoughts or facts on any subject in no particular order other than that dictated by the passing of time. A writer will instead use time, synthesizing these thoughts, ordering them, weighing which points count more than others, seeing how his views evolved in the writing process itself, and responding to an editor’s perusal of a draft or two. The result is almost always more measured, more satisfying, and more enduring than a blizzard of posts. The triumphalist notion that blogging should somehow replace traditional writing is as foolish as it is pernicious. In some ways, blogging’s gifts to our discourse make the skills of a good traditional writer much more valuable, not less. The torrent of blogospheric insights, ideas, and arguments places a greater premium on the person who can finally make sense of it all, turning it into something more solid, and lasting, and rewarding.
Well put.
Candid camera
Here’s a nice snapshot of the expansiveness of today’s web: Facebook has announced that it now stores 10 billion photographs uploaded by its members (as noted by Data Center Knowledge). Moreover, since it stores each photo in four different sizes, it actually has 40 billion image files in its system. More than 15 billion photos are viewed at the site every day, and at times of peak demand 300,000 images are viewed every second. An additional two or three terabytes of photos are uploaded every day.
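Just for scale, here’s a quick back-of-the-envelope check of those figures – a minimal sketch in Python that uses only the numbers Facebook reported, nothing from Facebook’s own systems:

```python
# Back-of-the-envelope arithmetic from the figures reported above.
photos_stored = 10e9                 # photos uploaded by members
sizes_per_photo = 4                  # each photo is kept in four sizes
image_files = photos_stored * sizes_per_photo
print(f"image files on hand: {image_files:,.0f}")          # 40,000,000,000

views_per_day = 15e9
avg_views_per_sec = views_per_day / 86_400                 # seconds in a day
peak_views_per_sec = 300_000
print(f"average views/second: {avg_views_per_sec:,.0f}")   # ~174,000
print(f"peak vs. average: {peak_views_per_sec / avg_views_per_sec:.1f}x")  # ~1.7x
```

In other words, even Facebook’s peak photo traffic runs at less than twice its round-the-clock average – those 10 billion pictures get looked at all day long.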
I did a quick scan of the 10 billion photos and found that 3 billion of them included an image of beer, in keg, can, bottle, or pitcher form, 1.5 billion included an image of a bra, and 675 million included both beer and a bra.
Almost human
In the final round of competition for this year’s Loebner Prize in artificial intelligence, held today at the University of Reading in the UK, a robot came within a whisker of passing the Turing Test. In a series of conversations with people, the winning robot, named Elbot, fooled 25% of its interlocutors into believing it was a genuine human being. A score of 30% would have been sufficient to pass Turing’s criterion for a true artificial intelligence.
Five machines competed in today’s finals, and each of them managed to convince at least one person that it was human. All the robots, moreover, received strikingly high scores for their conversational skills. Reports Kevin Warwick, a professor in the University of Reading’s School of Systems Engineering and the organizer of the test:
This has been a very exciting day with two of the machines getting very close to passing the Turing Test for the first time. In hosting the competition here, we wanted to raise the bar in Artificial Intelligence and although the machines aren’t yet good enough to fool all of the people all of the time, they are certainly at the stage of fooling some of the people some of the time.
Today’s results actually show a more complex story than a straight pass or fail by one machine. Where the machines were identified correctly by the human interrogators as machines, the conversational abilities of each machine was scored at 80 and 90%. This demonstrates how close machines are getting to reaching the milestone of communicating with us in a way in which we are comfortable. That eventual day will herald a new phase in our relationship with machines, bringing closer the time in which robots start to play an active role in our daily lives.
Here’s how Elbot opened a conversation with one of its human interrogators:
Judge: “Hi. How’s it going?”
Elbot: “I feel terrible today. This morning I made a mistake and poured milk over my breakfast instead of oil, and it rusted before I could eat it.”
Hah! The old robot-pretending-to-be-a-person-pretending-to-be-a-robot ruse. Elbot, you are the man!
UPDATE: You can chew the fat with Elbot here.
UPDATE: The Guardian’s Esther Addley is unimpressed.
No worries
Congress’s new and improved bailout bill includes, along with the obligatory helpings of pork, a provision that would increase FDIC bank deposit insurance from $100,000 to $250,000. Should the House pass the bill today, that boost in insurance would let a lot of Americans sleep a lot easier.
And that’s the problem.
When the government provides free insurance for an investment – any investment – it removes considerations of risk from investors’ decisions and, as a result, it distorts financial markets. In the worst-case scenario, that contributes to the kind of craziness that has put the world economy on a precipice. As Floyd Norris writes in the New York Times today:
As the ideas fly for saving the financial system, it is amazing — and appalling — how many of them seem to be straight out of the playbook from the savings and loan crisis. Then, as now, Congress decided to reassure investors by more than doubling the amount of deposits that could be insured … The raising of the deposit guarantee limits in 1980 to $100,000, from $40,000, made depositors less concerned about the health of their institution, and made it easier for dying institutions to attract deposits. Raising the figure to $250,000 now could have the same effect.
I’m not suggesting that insuring bank deposits is a bad thing. An even worse worst-case scenario is the kind of panic that leads to general runs on banks. What I don’t understand is why the insurance is set at a full 100% rather than, say, 80% or even 90%. By putting some fraction of deposits at risk, you’d at least provide a little incentive for people to be conscious of the health of the bank in which they’re putting their money, which in turn would put some additional pressure on banks to temper the risks they take.
Even if Congress had kept the full insurance on the first $100,000 of deposits and then provided 80% insurance for the next $150,000 of deposits, it would have injected a little more rationality into personal banking decisions. When it comes to money, panic is bad but nervousness is good.
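To make the arithmetic of that hypothetical scheme concrete, here’s a minimal sketch – the tiers and the 80% figure are the ones floated above, not anything in the actual bill:

```python
def hypothetical_coverage(deposit, full_limit=100_000, partial_band=150_000, partial_rate=0.80):
    """Coverage under the tiered scheme sketched above: the first
    $100,000 fully insured, the next $150,000 insured at 80 percent."""
    fully_covered = min(deposit, full_limit)
    partially_covered = min(max(deposit - full_limit, 0), partial_band) * partial_rate
    return fully_covered + partially_covered

for d in (100_000, 175_000, 250_000):
    c = hypothetical_coverage(d)
    print(f"${d:,} deposit -> ${c:,.0f} insured, ${d - c:,.0f} at risk")
```

A $250,000 depositor would have $30,000 riding on the soundness of the bank – enough, maybe, to keep the nervousness that’s good for the system from disappearing entirely.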
Here comes the “Windows Cloud”
Amazon and Microsoft are about to be partners – and competitors.
Last night, Amazon’s Werner Vogels announced that later this fall developers and companies will be able to run Microsoft Windows Server and SQL Server on the Amazon Elastic Compute Cloud (EC2), which up to now has been limited to Linux or other Unix-based systems. Given the broad popularity of the Microsoft operating system, the move promises to considerably expand the usefulness of the EC2 utility-computing system. According to Amazon:
Amazon EC2 running Windows Server or SQL Server provides an ideal environment for deploying ASP.NET web sites, high performance computing clusters, media transcoding solutions, and many other Windows-based applications. By choosing Amazon EC2 as the deployment environment for your Windows-based applications, you will be able to take advantage of Amazon’s proven scalability and reliability, as well as the cost-effective, pay-as-you-go pricing model offered by Amazon Web Services.
As Vogels notes, it will also become possible to run virtual Windows desktops from Amazon’s cloud.
Details about pricing have yet to be released. The big question, as Alan Williams notes, is this: Will Microsoft adopt a true utility pricing model for virtual computers running Windows, allowing Amazon to roll the operating system licensing cost into its hourly fee, or will the Windows licenses have to continue to be purchased separately? If it’s the former, Microsoft will have made a significant step forward into the utility world.
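To see why the licensing question matters, here’s a toy comparison of the two models. Every number below is hypothetical and chosen only for illustration; it is not actual EC2 or Windows Server pricing:

```python
# Hypothetical figures only -- not actual EC2 or Windows Server prices.
hours_per_month = 24 * 30        # an instance left running all month
base_rate = 0.10                 # $/hour for the underlying instance
license_surcharge = 0.03         # $/hour if the OS license is metered in
upfront_license = 700.00         # one-time cost if the license is bought separately

utility_model = hours_per_month * (base_rate + license_surcharge)
separate_license = hours_per_month * base_rate + upfront_license

print(f"metered license:  ${utility_model:,.2f} for the month")     # $93.60
print(f"separate license: ${separate_license:,.2f} for the month")  # $772.00
```

Run the server only a few hours a week and the gap grows wider still, which is the whole appeal of folding the license into the hourly fee.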
But an even bigger step into the cloud appears imminent. Microsoft CEO Steve Ballmer announced in London today that the company will unveil its own “cloud operating system” at its big developer conference at the end of this month. According to The Register, Ballmer said: “We need a new operating system designed for the cloud and we will introduce one in about four weeks, we’ll even have a name to give you by then. But let’s just call it for the purposes of today ‘Windows Cloud.’” Ballmer also said: “The last thing we want is for somebody else to obsolete us; if we’re gonna get obsoleted, we better do it to ourselves.” Even as it links up with Amazon Web Services, Microsoft is preparing to muscle onto Amazon’s turf.