
The centripetal web

“A centripetal force is that by which bodies are drawn or impelled, or any way tend, towards a point as to a center.” -Isaac Newton

When I started blogging, back in the spring of 2005, I would visit Technorati, the blog search engine, several times a day, both to monitor mentions of my own blog and to track discussions on subjects I was interested in writing about. But over the last year or so my blog-searching behavior has changed. I started using Google Blog Search to supplement Technorati, and then, without even thinking about it really, I began using Google Blog Search pretty much exclusively. At this point, I can’t even remember the last time I visited the Technorati site. Honestly, I don’t even know if it’s still around. (OK, I just checked: it’s still there.)

Technorati’s technical glitches were part of the reason for the change in my behavior. Even though Technorati offered more precise tools for searching the blogosphere, it was often slow to return results, or it would just fail outright. When it came to handling large amounts of traffic, Technorati just couldn’t compete with Google’s resources. But it wasn’t just a matter of responsiveness and reliability. As a web-services conglomerate, Google made it easy to enter one keyword and then do a series of different searches from its site. By clicking on the links to various search engines that Google conveniently arrays across the top of every results page, I could search the web, then search news stories, then search blogs, then (if I was really ambitious) search scholarly papers. Google offered the path of least resistance, and I happily took it.

I thought of this today as I read, on TechCrunch, a report that people seem to be abandoning Bloglines, the popular online feed reader, and that many of them are switching to Google Reader instead. The impetus, again, seems to be a mix of frustration with Bloglines’ glitches and the availability of a decent and convenient alternative operated by the giant Google. The first few comments on the TechCrunch post are revealing:

“switching temporary (?) to google reader, bloglines currently sucks too much”

“I got so fed up with bloglines’ quirks that I switched over to Google Reader and haven’t looked back”

“I’ve finally abandoned Bloglines for the Google Reader”

“Farewell, dear Bloglines. I loved you, but I’m going over to the dark side. I don’t love Google Reader, but at least I can get my feeds”

“Bloglines, please stop sucking. It’s been a couple of weeks now. I don’t want to have to move to Google Reader. Sigh.”

“Thanks for the tip about exporting feeds to Google Reader. I made the transition too. Goodbye Bloglines.”

During the 1990s, when the World Wide Web was bright and shiny and new, it exerted a strong centrifugal force on us. It pulled us out of the orbit of big, central media outlets and sent us skittering to the outskirts of the info-universe. Early web directories like Yahoo and early search engines like AltaVista, whatever their shortcomings (perhaps because of their shortcomings), led us to personal web pages and other small, obscure, and often oddball sources of information. The earliest web loggers, too, took pride in ferreting out and publicizing far-flung sites. And, of course, the big media outlets were slow to move to the web, so their gravitational fields remained weak or nonexistent online. For a time, the web had no mainstream; there were just brooks and creeks and rills and the occasional beaver pond.

And that landscape felt not only new but liberating. Those were the days when you could look around and easily convince yourself that the web would always be resistant to centralization, that it had leveled the media playing field for good. But that view was an illusion. Even back then, the counterforce to the web’s centrifugal force – the centripetal force that would draw us back toward big, central information stores – was building. Hyperlinks were creating feedback loops that served to amplify the popularity of popular sites, feedback loops that would become massively more powerful when modern search engines, like Google, began to rank pages on the basis of links and traffic and other measures of popularity. Navigational tools that used to emphasize ephemera began to filter it out. Roads out began to curve back in.
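
To make that feedback loop concrete, here is a minimal sketch, in Python, of the “rich get richer” dynamic. It is a toy preferential-attachment simulation, not any search engine’s actual algorithm: each new page links to an existing page with probability proportional to the links that page already has, and a handful of early favorites end up with a wildly disproportionate share of the attention.

    # Toy "rich get richer" simulation -- illustrative only, not how any real
    # search engine ranks pages. Each new page links to an existing page chosen
    # in proportion to how many links that page already has, so early popularity
    # compounds into dominance.
    import random

    random.seed(42)

    link_counts = [1, 1, 1, 1, 1]            # five seed pages, one link apiece
    for _ in range(10_000):                  # ten thousand new pages arrive
        target = random.choices(range(len(link_counts)), weights=link_counts)[0]
        link_counts[target] += 1             # the popular get more popular
        link_counts.append(1)                # the newcomer starts with one link

    top_five = sorted(link_counts, reverse=True)[:5]
    print("links held by the five most-linked pages:", top_five)
    print("their share of all links:", round(sum(top_five) / sum(link_counts), 3))

The exact numbers vary with the random seed, but the shape is always the same: most pages keep their single link while a few early pages accumulate a large multiple of everyone else’s, which is precisely the concentration that popularity-weighted ranking then locks in.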

At the same time, and for related reasons, scale began to matter. A lot. Big media outlets moved online, creating vast, enticing pools of branded content. Search engines and content aggregators, like Google, expanded explosively, gaining the money and expertise to create technical advantages – in speed, reliability, convenience, and so on – that often proved decisive in attracting and holding consumers. And, of course, people began to demonstrate their innate laziness, retreating from the wilds and following the increasingly well-worn paths of least resistance. A Google search may turn up thousands of results, but few of us bother to scroll beyond the top three. When convenience meets curiosity, convenience usually wins.

Wikipedia provides a great example of the formative power of the web’s centripetal force. The popular online encyclopedia is less the “sum” of human knowledge (a ridiculous idea to begin with) than the black hole of human knowledge. At heart a vast exercise in cut-and-paste paraphrasing (it explicitly bans original thinking), Wikipedia first sucks in content from other sites, then it sucks in links, then it sucks in search results, then it sucks in readers. One of the untold stories of Wikipedia is the way it has siphoned traffic from small, specialist sites, even though those sites often have better information about the topics they cover. Wikipedia articles have become the default external link for many creators of web content, not because Wikipedia is the best source but because it’s the best-known source and, generally, it’s “good enough.” Wikipedia is the lazy man’s link, and we’re all lazy men, except for those of us who are lazy women.

Now, it’s true, as well, that Wikipedia provides some centrifugal force, by including links to sources and related works at the foot of each article. To its credit, it’s an imperfect black hole. But compared to the incredible power of its centripetal force, magnified by search engine feedback loops and link-laziness, its centrifugal force is weak and getting weaker. Which is, increasingly, the defining dynamic of the web as a whole. The web’s centrifugal force hasn’t gone away – it’s there in the deliberately catholic linking of a Jason Kottke or a Slashdot, say, or in a list of search results arranged by date rather than by “relevance” – but it’s far less potent than the centripetal force, particularly when those opposing forces play out at the vast scale of the web, where even small advantages have enormous effects as they ripple through billions of transactions. Yes, we still journey out to the far reaches of the still-expanding info-universe, but for most of us, most of the time, the World Wide Web has become a small and comfortable place. Indeed, statistics indicate that web traffic is becoming more concentrated at the largest sites, even as the overall number of sites continues to increase, and one recent study found that as people’s use of the web increases, they become “more likely to concentrate most of their online activities on a small set of core, anchoring Websites.”

Chris Anderson’s “long tail” remains an elegant and instructive theory, but it already feels dated, a description of the web as we once imagined it to be rather than as it is. The long tail is still there, of course, but far from wagging the web-dog, it’s taken on the look of a vestigial organ. Chop it off, and most people would hardly notice the difference. On the web as off it, things gravitate toward large objects. The center holds.

Googling and intelligence

Earlier this week, UCLA’s Memory and Aging Research Center released a summary of the results of a study of the effects of Internet searching on brain activity, timed to coincide with the release of a new book, iBrain, by the center’s director, Gary Small. In the study, Small and his team used functional magnetic resonance imaging (fMRI) to track the blood flows in the brains of 24 middle-aged and elderly volunteers as they either searched the web or read books. When the test subjects read books, they displayed, as would be expected, significant brain activity in “the regions controlling language, reading, memory and visual abilities.” When the subjects searched the Web, those who already had experience using the Net also displayed considerable activity in the brain regions that “control decision-making and complex reasoning.” (Those without Net experience displayed much less activity in those regions.)

In a great example of the kind of knee-jerk mental response that often characterizes high-speed media, a number of blogs and other media outlets seized on the study as evidence that the Net is “making us smarter.” The findings were portrayed as a counterweight to my recent article in the Atlantic, “Is Google Making Us Stupid?,” which argued that the Internet may be eroding our capacity for deep and concentrated thought. Wired’s Epicenter blog, for instance, brayed, “All that talk about how Google is making us stupid is a bit of a crock, according to a new study from UCLA researchers.” The Epicenter headline: “Google Makes You Smart.”

Not quite.

I’m thrilled, first of all, that brain researchers are beginning to explore the cognitive consequences of Internet use, and I look forward to reading Small’s full report on his study when it is published in the American Journal of Geriatric Psychiatry. This study, and the many others that are sure to follow, will begin to give us a picture of what happens when our brains adapt to the Web and its distinctive style of transmitting and displaying information. But this picture will necessarily develop slowly and fuzzily. FMRI scans have been a godsend to brain researchers, but the evidence they present is often imprecise. Blood flows in the brain tell us much about what the brain is doing but very little about the quality of thought that results. And when we’re talking about intelligence, it’s the quality of thought that matters.

It’s good to know that older people can, apparently, get some brain exercise through googling – and that that may help them maintain their mental acuity. But to leap from observing that many areas of the brain are activated when searching the Net to the contention that searching the Net makes us more intelligent is like saying that doing pushups improves our carpentry skills. I would guess that you’d see similarly broad brain activity patterns in, say, people playing Pac-man. Does that mean that Pac-man makes us more intelligent? No, it just means that playing Pac-man involves many brain circuits.

The Freakonomics blog had a good take on the study:

Small’s team found that experienced web users experience increased stimulation in the regions of their brains that handle complex reasoning and decision making. The activity was more widespread than when the same subjects were reading a book, or when inexperienced web users surfed the internet. In other words, being able to tease out useful information from all the chaff on the internet can be as intellectually demanding a task as completing a crossword puzzle. But is puzzle solving the same kind of “smartness” as the “smartness” that comes from reading a book?

Indeed, I wonder whether the fact that more brain regions are in simultaneous use during web use than during reading doesn’t illustrate (among other things) that concentrated thought becomes more difficult to maintain when reading online than when reading a printed work. Is the relative breadth of brain activity discovered by Small and his colleagues also a map of distraction?

Gary Small wrote a letter to the Atlantic in response to my article. “Nicholas Carr correctly notes that technology is changing our lives and our brains,” he said, continuing:

The average young person spends more than eight hours each day using technology (computers, PDAs, TV, videos), and much less time engaging in direct social contact. Our UCLA brain-scanning studies are showing that such repeated exposure to technology alters brain circuitry, and young developing brains (which usually have the greatest exposure) are the most vulnerable … More than 300,000 years ago, our Neanderthal ancestors discovered handheld tools, which led to the co-evolution of language, goal-directed behavior, social networking, and accelerated development of the frontal lobe, which controls these functions. Today, video-game brain, Internet addiction, and other technology side effects appear to be suppressing frontal-lobe executive skills and our ability to communicate face-to-face. Instead, our brains are developing circuitry for online social networking and are adapting to a new multitasking technology culture.

What Small’s work shows us, above all else, is that Internet use does alter the functioning of our brains, changing how we think and even who we are. We are googling our way, compulsively, to a new mind.

Surface tensions

In the new issue of the Atlantic, veteran blogger Andrew Sullivan writes a thoughtful and generous paean to blogging, which he calls – and he means it more as compliment than as criticism – “a superficial medium”:

By superficial, I mean simply that blogging rewards brevity and immediacy. No one wants to read a 9,000-word treatise online. On the Web, one-sentence links are as legitimate as thousand-word diatribes—in fact, they are often valued more. And, as Matt Drudge told me when I sought advice from the master in 2001, the key to understanding a blog is to realize that it’s a broadcast, not a publication. If it stops moving, it dies. If it stops paddling, it sinks.

But the superficiality masked considerable depth—greater depth, from one perspective, than the traditional media could offer. The reason was a single technological innovation: the hyperlink. An old-school columnist can write 800 brilliant words analyzing or commenting on, say, a new think-tank report or scientific survey. But in reading it on paper, you have to take the columnist’s presentation of the material on faith, or be convinced by a brief quotation (which can always be misleading out of context). Online, a hyperlink to the original source transforms the experience. Yes, a few sentences of bloggy spin may not be as satisfying as a full column, but the ability to read the primary material instantly—in as careful or shallow a fashion as you choose—can add much greater context than anything on paper …

A blog, therefore, bobs on the surface of the ocean but has its anchorage in waters deeper than those print media is technologically able to exploit. It disempowers the writer to that extent, of course. The blogger can get away with less and afford fewer pretensions of authority. He is—more than any writer of the past—a node among other nodes, connected but unfinished without the links and the comments and the track-backs that make the blogosphere, at its best, a conversation, rather than a production.

He goes on to reflect on the downside of blogging’s essential superficiality: its “failure to provide stable truth or a permanent perspective”:

A traditional writer is valued by readers precisely because they trust him to have thought long and hard about a subject, given it time to evolve in his head, and composed a piece of writing that is worth their time to read at length and to ponder. Bloggers don’t do this and cannot do this—and that limits them far more than it does traditional long-form writing.

A blogger will air a variety of thoughts or facts on any subject in no particular order other than that dictated by the passing of time. A writer will instead use time, synthesizing these thoughts, ordering them, weighing which points count more than others, seeing how his views evolved in the writing process itself, and responding to an editor’s perusal of a draft or two. The result is almost always more measured, more satisfying, and more enduring than a blizzard of posts. The triumphalist notion that blogging should somehow replace traditional writing is as foolish as it is pernicious. In some ways, blogging’s gifts to our discourse make the skills of a good traditional writer much more valuable, not less. The torrent of blogospheric insights, ideas, and arguments places a greater premium on the person who can finally make sense of it all, turning it into something more solid, and lasting, and rewarding.

Well put.

Candid camera

Here’s a nice snapshot of the expansiveness of today’s web: Facebook has announced that it now stores 10 billion photographs uploaded by its members (as noted by Data Center Knowledge). Moreover, since it stores each photo in four different sizes, it actually has 40 billion image files in its system. More than 15 billion photos are viewed at the site every day, and at times of peak demand 300,000 images are viewed every second. An additional two or three terabytes of photos are uploaded every day.

I did a quick scan of the 10 billion photos and found that 3 billion of them included an image of beer, in keg, can, bottle, or pitcher form, 1.5 billion included an image of a bra, and 675 million included both beer and a bra.
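
Joking aside, the figures in the announcement are easy to sanity-check with a little back-of-the-envelope arithmetic. The sketch below uses only the numbers quoted above:

    # Back-of-the-envelope check of the Facebook photo figures quoted above.
    photos_stored = 10e9                 # photos uploaded by members
    sizes_per_photo = 4                  # each photo kept in four sizes
    print("image files stored:", int(photos_stored * sizes_per_photo))  # 40 billion

    views_per_day = 15e9
    seconds_per_day = 24 * 60 * 60
    print("average views per second:", round(views_per_day / seconds_per_day))
    # roughly 174,000 per second on average, against the 300,000 quoted at peak

In other words, peak demand runs at a bit under twice the average viewing rate.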

Almost human

In the final round of competition for this year’s Loebner Prize in artificial intelligence, held today at the University of Reading in the UK, a robot came within a whisker of passing the Turing Test. In a series of conversations with people, the winning robot, named Elbot, fooled 25% of its interlocutors into believing it was a genuine human being. A score of 30% would have been sufficient to pass Turing’s criterion for a true artificial intelligence.

Five machines competed in today’s finals, and each of them managed to convince at least one person that it was human. All the robots, moreover, received strikingly high scores for their conversational skills. Reports Kevin Warwick, a professor in the University of Reading’s School of Systems Engineering and the organizer of the test:

This has been a very exciting day with two of the machines getting very close to passing the Turing Test for the first time. In hosting the competition here, we wanted to raise the bar in Artificial Intelligence and although the machines aren’t yet good enough to fool all of the people all of the time, they are certainly at the stage of fooling some of the people some of the time.

Today’s results actually show a more complex story than a straight pass or fail by one machine. Where the machines were identified correctly by the human interrogators as machines, the conversational abilities of each machine was scored at 80 and 90%. This demonstrates how close machines are getting to reaching the milestone of communicating with us in a way in which we are comfortable. That eventual day will herald a new phase in our relationship with machines, bringing closer the time in which robots start to play an active role in our daily lives.

Here’s how Elbot opened a conversation with one of its human interrogators:

Judge: “Hi. How’s it going?”

Elbot: “I feel terrible today. This morning I made a mistake and poured milk over my breakfast instead of oil, and it rusted before I could eat it.”

Hah! The old robot-pretending-to-be-a-person-pretending-to-be-a-robot ruse. Elbot, you are the man!

UPDATE: You can chew the fat with Elbot here.

UPDATE: The Guardian’s Esther Addley is unimpressed.

No worries

Congress’s new and improved bailout bill includes, along with the obligatory helpings of pork, a provision that would increase FDIC bank deposit insurance from $100,000 to $250,000. Should the House pass the bill today, that boost in insurance would let a lot of Americans sleep a lot easier.

And that’s the problem.

When the government provides free insurance for an investment – any investment – it removes considerations of risk from investors’ decisions and, as a result, it distorts financial markets. In the worst-case scenario, that contributes to the kind of craziness that has put the world economy on a precipice. As Floyd Norris writes in the New York Times today:

As the ideas fly for saving the financial system, it is amazing — and appalling — how many of them seem to be straight out of the playbook from the savings and loan crisis. Then, as now, Congress decided to reassure investors by more than doubling the amount of deposits that could be insured … The raising of the deposit guarantee limits in 1980 to $100,000, from $40,000, made depositors less concerned about the health of their institution, and made it easier for dying institutions to attract deposits. Raising the figure to $250,000 now could have the same effect.

I’m not suggesting that insuring bank deposits is a bad thing. An even worse worst-case scenario is the kind of panic that leads to general runs on banks. What I don’t understand is why the insurance is set at a full 100% rather than, say, 80% or even 90%. By putting some fraction of deposits at risk, you’d at least provide a little incentive for people to be conscious of the health of the bank in which they’re putting their money, which in turn would put some additional pressure on banks to temper the risks they take.

Even if Congress had kept the full insurance on the first $100,000 of deposits and then provided 80% insurance for the next $150,000 of deposits, it would have injected a little more rationality into personal banking decisions. When it comes to money, panic is bad but nervousness is good.
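
For the sake of concreteness, here is a minimal sketch of how the tiered arrangement described above would work. The tiers and the 80% rate come straight from the paragraph; the sample deposits are just illustrations:

    # Hypothetical tiered deposit insurance: 100% coverage on the first $100,000,
    # then 80% coverage on the next $150,000 (the scheme sketched above).
    def insured_amount(deposit: float) -> float:
        full_tier = min(deposit, 100_000)                        # fully insured slice
        partial_tier = min(max(deposit - 100_000, 0), 150_000)   # 80%-insured slice
        return full_tier + 0.8 * partial_tier

    for deposit in (100_000, 175_000, 250_000):
        covered = insured_amount(deposit)
        print(f"${deposit:,} deposit -> ${covered:,.0f} insured, "
              f"${deposit - covered:,.0f} at risk")

A depositor with the full $250,000 would still have $30,000 riding on the bank’s health, which is exactly the kind of modest, rational nervousness the current bill eliminates.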