
Pages and “pages”

In reading some of the comments posted online about my Atlantic piece, I kept coming across references to the article being “four pages long.” At first I wondered, “Can’t these people count? The article is six pages long!” (OK, five pages if you exclude the illustration and titling.) Then I realized – duh! – that people were referring to the online version of the article, which indeed is divided into four “pages.” (Of course, a page of text on the web is an arbitrary construct, so knowing the number of web pages doesn’t actually tell you much about the length of a piece, but that’s another story.)

Anyway, it would be interesting to do a study of how the experience of reading a particular piece of writing varies depending on whether a person reads it in print or online. A couple of people have pointed to the inclusion of hyperlinks in the web version of my article as showing the superiority of the web as a medium for writing. I don’t buy that (even though I’m well aware of the value of links). These days, I share Jon Udell’s sense of relief in reading text without links. Jon writes: “Nick Carr’s essay in the current Atlantic Monthly crystallizes a lot of what I’ve been feeling for a couple of years about how our use of the Net is changing us. Not co-incidentally I read the essay in the printed magazine whose non-hypertextuality I experienced as a feature, not a bug.” Hyperlinks have a lot of utility, but they’re distractions as well, scattering concentration and, often, getting in the way of deep reading.

Now go click on that link and read the rest of Udell’s post.

The scatterbrained

In the few days since my essay on the Internet’s effects on cognition appeared in The Atlantic, I’ve been flooded with emails and blog posts from people saying that my struggles with deep reading and concentration mirror their own experiences. “I found the first couple of pages almost eerie in how well they described my own feelings,” wrote a typical correspondent today. Uber-blogger Andrew Sullivan said in a post a short while ago that my story “strikes close to home.”

I feel an odd mix of emotions. I’m of course gratified to see further evidence confirming my hypothesis, but it’s a melancholy sense of gratification since the hypothesis is (to me, anyway) such an unhappy one. There should be a word to describe the feeling of finding support for one’s gloomy idea.

Microsoft to put “many millions” of servers in cloud

Bill Gates, in his farewell address at Microsoft’s TechEd developer conference today, sketched out Microsoft’s expansive plan for cloud computing. The company, he said, will have “many millions” of servers in a network of data centers. Those centers will ultimately provide, as utility services, everything done today by traditional Microsoft software installed on local servers:

We’re taking everything we do at the server level, and saying that we will have a service that mirrors that exactly. The simplest one of those is to say, okay, I can run Exchange on premise, or I can connect up to it as a service. But even at the BizTalk level, we’ll have BizTalk Services. For SQL, we’ll have SQL Server Data Services, and so you can connect up, build the database. It will be hosted in our cloud with the big, big data center, and geo-distributed automatically. This is kind of fascinating because it’s getting us to think about data centers at a scale that never existed before. Literally today we have, in our data center, many hundreds of thousands of servers, and in the future we’ll have many millions of those servers.

The design of massive data centers, said Gates, is one of the key areas of innovation in computing today, and the huge investments required will limit the construction of cloud-computing centers to just a handful of companies:

When you think about the design of how you bring the power in, how you deal with the heating, what sort of sensors do you have, what kind of design do you want for the motherboard, you can be very radical, in fact, come up with some huge improvements as you design for this type of scale. And so going from a single server all the way up to this mega data center that Microsoft, and only a few others will have, it gives you an option to run pieces of your software at that level.

Cloud services will be offered through three different business models, said Gates: “Some of these will be free, some will be ad-supported. A number, the ones that provide rich [service] guarantees, will be provided on a commercial basis [i.e., through subscription or other fees].”

Gates also suggested that different kinds of services would be provided in different locations, depending on the capacity of local networks:

… if you look at the really utilitarian uses of the Internet, a lot of those can be done at fairly low bandwidth, even with a cell phone in a rural village in say Africa or India looking at the crop prices or your health records or getting advice and things like that. So, we will have to start to think of the Internet as including parts that are not super high bandwidth and adopting applications for that, and then another part which is more in the rich-world urban type area where you can assume [the availability] of very high bandwidth.

That’s a good point: It will be a long, long time before the cloud covers the globe uniformly.

Was eBay a fad?

We already know that the famously cute story of eBay’s origin – founder Pierre Omidyar launched the site to help his fiancee trade the PEZ dispensers she collected – was a lie cooked up by a PR operative. We also know that the company’s vaunted “reputation system” – the foundation of what has long been perceived as a radically new kind of self-organizing and self-policing commercial community – has been crumbling.

Now we’re beginning to find out that eBay’s seemingly revolutionary core – the online auction – may have been a fad all along. As Business Week reports, eBay’s auctions are “a dying breed.” Buyers and sellers are reverting to the traditional retailing model of fixed prices:

Auctions were once a pillar of e-commerce. People didn’t simply shop on eBay. They hunted, they fought, they sweated, they won. These days, consumers are less enamored of the hassle of auctions, preferring to buy stuff quickly at a fixed price … “If I really want something I’m not going to goof around [in auctions] for a small savings,” says Dave Dribin, a 34-year-old Chicago resident who used to bid on eBay items, but now only buys retail …

At the current pace, this may be the first year that eBay generates more revenue from fixed-price sales than from auctions, analysts say. “The bloom is well off the rose with regard to the online-auction thing,” says Tim Boyd, an analyst with American Technology Research. “Auctions are losing a ton of share, and fixed price has been gaining pretty steadily.”

Back in 1999 the big news in online retailing was the rush by companies like Yahoo and Amazon to roll out auction sites in emulation of eBay. Auctions had become, as CNET reported at the time, “a ‘must-have’ element for e-commerce sites.” On the day that Amazon launched its auction business, the company’s stock jumped 8 percent. “Fixed prices are only a 100-year-old phenomenon,” Pattie Maes, of MIT’s Media Lab, told Business Week in a 1999 cover story. “I think they will disappear online, simply because it is possible – cheap and easy – to vary prices online.”

Maes, and many others, made the mistake of exaggerating the benefits of the new and discounting the benefits of the old.

EBay made a ton of money running auctions over the past ten years, and it may continue to be successful as the operator of an online mall. But it is not the company we imagined it to be. Its story has become a cautionary tale about the dangers of wishful thinking and fanciful extrapolation.

What we talk about when we talk about singularity

The Singularity, that much-anticipated moment, or nano-moment, when our once-tractable silicon servants rocket past us, intellectually speaking, in a blur not unlike the one you see when Scotty activates the Enterprise’s warp drive on Star Trek, pausing only (we pray) to allow us to virtualize our mental circuitry and upload it into their capacious memory banks (watch for the 2035 launch of Amazon S4: Simple Soul Storage Service), thus achieving a sort of neutered, brain-in-a-jar immortality, yes, that Singularity, that Rapture of the Geeks, as it is known to snarky unbelievers, is the subject of a big stack of articles – all written by humans, alas, but worth reading nonetheless – in a new special issue of IEEE Spectrum.

Vernor Vinge, the original Singularitarian, lays out five scenarios that, in some combination, could give rise to “a posthuman epoch”:

The AI Scenario: We create superhuman artificial intelligence (AI) in computers.

The IA Scenario: We enhance human intelligence through human-to-computer interfaces—that is, we achieve intelligence amplification (IA).

The Biomedical Scenario: We directly increase our intelligence by improving the neurological operation of our brains.

The Internet Scenario: Humanity, its networks, computers, and databases become sufficiently effective to be considered a superhuman being.

The Digital Gaia Scenario: The network of embedded microprocessors becomes sufficiently effective to be considered a superhuman being.

“Depending on our inventiveness – and our artifacts’ inventiveness – there is the possibility,” writes Vinge, “of a transformation comparable to the rise of human intelligence in the biological world. Even if the singularity does not happen, we are going to have to put up with singularity enthusiasms for a long time. Get used to it.”

The special issue includes both enthusiasms and skepticisms, sometimes in the same article. Glenn Zorpette, the executive editor of IEEE Spectrum, takes it as a given that “as computers become stupendously powerful” in coming years “life really is going to get more interesting,” but he pooh-poohs the suggestion, popularized by Ray Kurzweil, that human immortality will be a byproduct of the Singularity:

Why should a mere journalist question Kurzweil’s conclusion that some of us alive today will live indefinitely? Because we all know it’s wrong. We can sense it in the gaping, take-my-word-for-it extrapolations and the specious reasoning of those who subscribe to this form of the singularity argument. Then, too, there’s the flawed grasp of neuroscience, human physiology, and philosophy. Most of all, we note the willingness of these people to predict fabulous technological advances in a period so conveniently short it offers themselves hope of life everlasting. This has all gone on too long. The emperor isn’t wearing anything, for heaven’s sake.

(But at least he’s buff, thanks to all those supplements.)

It may seem a waste of time to debate the contours of a world that, as Vinge says, will be “intrinsically unintelligible to the likes of us.” But, hey, you have to do something to pass the time while waiting for Godot 2.0.

My favorite article is the practical-minded “Economics of the Singularity,” in which George Mason University economist Robin Hanson sketches out the marketplace of the posthuman epoch. Hanson believes that the best chance for creating an advanced machine intelligence will be through simply “copying the brain”:

This approach, known as whole brain emulation, starts with a real human brain, scanned in enough detail to see the exact location and type of each part of each neuron, such as dendrites, axons, and synapses. Then, using models of how each of these neuronal components turns input signals into output signals, you would construct a computer model of this specific brain. With accurate enough models and scans, the final simulation should have the same input-output behavior as the original brain. It would, in a sense, be the “uploaded mind” of whoever served as the template …

Though it might cost many billions of dollars to build one such machine, the first copy might cost only millions and the millionth copy perhaps thousands or less. Mass production could then supply what has so far been the one factor of production that has remained critically scarce throughout human history: intelligent, highly trained labor.
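
As a toy illustration of the kind of input-output neuron model Hanson is describing, here is a minimal leaky integrate-and-fire simulation in Python. The model and every parameter value are generic textbook placeholders of my own choosing, not anything drawn from Hanson’s article; an actual emulation would need far more detailed models of dendrites, axons, and synapses.

```python
# A toy leaky integrate-and-fire neuron: the simplest sort of model that turns
# input signals into output spikes. All parameter values are generic placeholders.
def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, resistance=1e8):
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # The membrane potential leaks back toward rest while integrating input.
        v += ((v_rest - v) + resistance * i_in) * dt / tau
        if v >= v_thresh:               # threshold crossed: record a spike
            spike_times.append(step * dt)
            v = v_reset                 # and reset the membrane potential
    return spike_times

# A constant 0.2-nanoamp input for 100 milliseconds yields a regular spike train.
print(simulate_lif([2e-10] * 1000))
```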

Once that constraint is removed – and smarts become endlessly abundant – we’ll see “the next radical jump in economic growth,” where “the world economy, which now doubles in 15 years or so, would soon double in somewhere from a week to a month.” Three factors would spur the explosion in growth:

First, we could create capable machines in much less time than it takes to breed, rear, and educate new human workers. Being able to make and retire machine workers as fast as needed could easily double or quadruple growth rates.

Second, the cost of computing has long been falling much faster than the economy has been growing. When the workforce is largely composed of computers, the cost of making workers will therefore fall at that faster rate, with all that this entails for economic growth.

Third, as the economy begins growing faster, computer usage and the resources devoted to developing computers will also grow faster. And because innovation is faster when more people use and study something, we should expect computer performance to improve even faster than in the past.
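
To put Hanson’s doubling-time comparison in rough numbers, here is a quick back-of-the-envelope calculation (my arithmetic, not the article’s) of the annual growth implied by a fifteen-year doubling versus a monthly or weekly one:

```python
# Annual growth factor implied by a given doubling time: 2 ** (1 / T_years).
def annual_growth_factor(doubling_time_years):
    return 2 ** (1 / doubling_time_years)

for label, years in [("today, ~15-year doubling", 15),
                     ("monthly doubling", 1 / 12),
                     ("weekly doubling", 1 / 52)]:
    print(f"{label}: output multiplies by about {annual_growth_factor(years):.3g} per year")
```

A fifteen-year doubling works out to roughly 5 percent growth a year; a monthly doubling multiplies the economy about four-thousand-fold a year, and a weekly doubling by a factor on the order of 10^15. That is what makes Hanson’s “radical jump” so hard to picture.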

For humans forced to compete with the vast machine horde, the prospects would seem to be pretty dim:

The population of smart machines would explode even faster than the economy. So even though total wealth would increase very rapidly, wealth per machine would fall rapidly. If these smart machines are considered “people,” then most people would be machines, and per-person wealth and wages would quickly fall to machine-subsistence levels, which would be far below human-subsistence levels. Salaries would probably be just high enough to cover the rent on a tiny body, a few cubic centimeters of space, the odd spare part, a few watts of energy and heat dumping, and a Net connection.

The diminutive machine-people would cluster like insects in vast urban communities, “with many billions living in the volume of a current skyscraper, paying astronomical rents that would exclude most humans. As emulations of humans, these creatures would do the same sorts of things … that humans have done for hundreds of thousands of years: form communities and coalitions, fall in love, gossip, argue, make art, commit crimes, get work done, innovate, and have fun.”

Hanson doesn’t speculate on what will be left for humans to do in this world, but I think the answer probably lies in the machine-people’s desire to “have fun.” Though lacking human bodies to go along with their human minds, the machine-people will, one assumes, have both phantom limbs and phantom desires. As a result, we can expect that the online porn industry will expand exponentially to the point where it employs, in one capacity or another, all remaining human beings. It won’t exactly be heaven on earth, but it sure beats being a brain in a database.

Understanding Amazon Web Services

There are two ways to look at Amazon.com: as a retailer, and as a software company that runs a retailing application. Both are accurate, and in combination they explain why Amazon, rather than a traditional computer company, has become the most successful early mover in supplying computing as a utility service. For Amazon, running a cloud computing service is core to its business in a way that it isn’t for, say, IBM, Sun, or HP.

In a brief but illuminating video interview with Om Malik, Amazon CEO Jeff Bezos underscores this point in describing the origins of Amazon Web Services. “Four years ago is when it started,” he says, “and we had enough complexity inside Amazon that we were finding we were spending too much time on fine-grained coordination between our network engineering groups and our applications programming groups. Basically what we decided to do is build a [set of APIs] between those two layers so that you could just do coarse-grained coordination between those two groups. Amazon is, you know, just a web-scale application.” As it developed the APIs for its own applications developers, it realized that the interfaces would be useful, as well, to other programmers of web apps: “And so, look, let’s make it a new business. It has the potential one day to be a meaningful business for the company, and we need to do it for ourselves anyway.”
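
For readers who haven’t touched Amazon Web Services, here is a minimal sketch of what that coarse-grained style of call looks like, using the current Python SDK (boto3); the bucket and key names are placeholders I made up for illustration. The point is simply that storage is requested through a single high-level API call, with no fine-grained coordination with a network engineering group.

```python
# A hypothetical coarse-grained infrastructure call: store an object in Amazon S3
# with one API request. Assumes AWS credentials are configured in the environment;
# the bucket and key names below are invented for this example.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-reports-bucket",
    Key="usage/2008-06.csv",
    Body=b"date,requests\n2008-06-01,1024\n",
)
```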

Bezos goes on to note that Amazon’s retailing operation is “a low gross margin business” compared to software and technology businesses, which “tend to have very high margins.” The relatively low profitability of the retailing business gave Amazon the incentive to create a highly efficient, highly automated computing system, which in turn could become the foundation for a set of cloud computing services that could be sold at low enough prices to attract a large clientele. It also made a low-margin utility business attractive to the firm in a way that it isn’t for many large tech companies, which are averse to making big capital investments in new, low-margin businesses.

“On the surface, superficially, [cloud computing] appears to be very different [from our retailing business],” Bezos sums up. “But the fact is we’ve been running a web-scale application for a long time, and we needed to build this set of infrastructure web services just to be able to manage our own internal house.”

This is the great advantage that, at this early stage in the evolution of the utility computing industry, is held by companies like Amazon and Google. Building an efficient cloud-computing infrastructure does not represent an added expense for them; it’s a prerequisite to the success of their existing business.