Monthly Archives: December 2009

AWS: the new Chicago Edison

The key to running a successful large-scale utility is to match capacity (i.e., capital) to demand, and the key to matching capacity to demand is to manipulate demand through pricing. The worst thing for a utility, particularly in the early stages of its growth, is to have unused capacity. At the end of the nineteenth century, Samuel Insull, president of the then-tiny Chicago Edison, started the electric utility revolution when he had the counterintuitive realization that to make more money his company had to cut its prices drastically, at least for those customers whose patterns of electricity use would help the utility maximize its capacity utilization.

Amazon Web Services is emerging as the Chicago Edison of utility computing. Perhaps because its background in retailing gives it a different perspective than that of traditional IT vendors, it has left those vendors in the dust when it comes to pioneering the new network-based model of supplying computing and storage capacity. Late yesterday, the company continued its innovations on the pricing front, announcing a new pricing model aimed at selling spare computing capacity, through its EC2 service, on a moment-by-moment basis. Buyers can bid for unused compute cycles in what is essentially a spot market for virtual computers. When their bid is higher than the spot price in the market, their virtual machines start running (at the spot price). When their bid falls below the spot price, their machines stop running, and the capacity is reallocated to those customers with higher bids.
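The allocation rule is simple enough to sketch. Here is a minimal model of a uniform-price spot market for spare capacity; the names, prices, and clearing rule are illustrative only, not Amazon's actual mechanism:

```python
# A toy spot market for spare compute capacity. Illustrative only;
# this is not Amazon's actual pricing algorithm.

def clear_market(bids, capacity):
    """Given {customer: bid} and a number of spare instances,
    return (spot_price, set of customers whose machines run).

    The top `capacity` bidders run, and all of them pay the same
    clearing price: the lowest bid among the winners."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = ranked[:capacity]
    if not winners:
        return None, set()
    spot_price = winners[-1][1]   # lowest accepted bid sets the price
    return spot_price, {name for name, _ in winners}

bids = {"alice": 0.12, "bob": 0.08, "carol": 0.05, "dave": 0.30}
spot, running = clear_market(bids, capacity=3)
print(spot, sorted(running))   # prints: 0.08 ['alice', 'bob', 'dave']
```

Note how the dynamics described above fall out of the rule: if spare capacity shrinks to two instances, the clearing price rises to Alice's bid and Bob's machine stops; if it grows to four, the price drops and Carol's machine starts.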

Amazon’s spot market promises to significantly reduce the cost of computing tasks that don’t have immediate deadlines, such as large data-mining or other analytical efforts. And it promises to further increase Amazon’s capacity utilization, which will in turn allow Amazon to continue to reduce its prices, attract more customers, further smooth demand, and avoid wasted capital. As Insull discovered, cutting prices to optimize capacity utilization sets a virtuous cycle in motion.

In describing the new “spot instances” plan, AWS chief Werner Vogels used words that could have come out of Insull’s mouth a century ago:

Spot Instances are an innovation that is made possible by the unparalleled economies of scale created by the tremendous growth of the AWS Infrastructure Services. The broad Amazon EC2 customer base brings such diversity in workload and utilization patterns that it allows us to operate Amazon EC2 with extreme efficiency. True to the Amazon philosophy, we let our customers benefit from the economies of scale they help us create by lowering our prices when we achieve lower cost structures. Consistently we have lowered compute, storage and bandwidth prices based on such cost savings.

At Chicago Edison, Insull had nothing to lose. He had recently quit his executive position at Thomas Edison’s General Electric, the dominant player in on-premises electricity generation. No longer subject to the constraints of the old business model, which he had played a crucial role in establishing, he had the freedom to destroy that model. Amazon Web Services is also an outsider in the IT business, unbeholden to the constraints of the established and very lucrative business model, and that is the company’s great advantage.

UPDATE: Jonathan Boutelle, a founder of Slideshare, already has a strategy for gaming AWS’s spot market: bid high, buy low. That should be music to Amazon’s ears. If enough buyers pursue it, the spot price will quickly approach the set price.

Hypermultitasking

The Britannica Blog has been running a forum on multitasking this week, including posts from Maggie Jackson, Howard Rheingold, and Heather Gold. My own small contribution to the discussion appears today and is reprinted below:

Thank God for multitasking. Can you imagine how dull life would be if we humans lacked the ability to rapidly and seamlessly shift our focus from one task or topic to another? We wouldn’t be able to listen to the radio while driving, have conversations while cooking, juggle assignments at work, or even chew gum while walking. The world would grind to a depressing halt.

The ability to multitask is one of the essential strengths of our infinitely amazing brains. We wouldn’t want to lose it. But as neurobiologists and psychologists have shown, and as Maggie Jackson has carefully documented, we pay a price when we multitask. Because the depth of our attention governs the depth of our thought and our memory, when we multitask we sacrifice understanding and learning. We do more but know less. And the more tasks we juggle and the more quickly we switch between them, the higher the cognitive price we pay.

The problem today is not that we multitask. We’ve always multitasked. The problem is that we never stop multitasking. The natural busyness of our lives is being amplified by the networked gadgets that constantly send us messages and alerts, bombard us with other bits of important and trivial information, and generally interrupt the train of our thought. The data barrage never lets up. As a result, we devote ever less time to the calmer, more attentive modes of thinking that have always given richness to our intellectual lives and our culture—the modes of thinking that involve concentration, contemplation, reflection, introspection. The less we practice these habits of mind, the more we risk losing them altogether.

There’s evidence that, as Howard Rheingold suggests, we can train ourselves to be better multitaskers, to shift our attention even more swiftly and fluidly among contending chores and stimuli. And that will surely help us navigate the fast-moving stream of modern life. But improving our ability to multitask, neuroscience tells us in no uncertain terms, will never return to us the depth of understanding that comes with attentive, single-minded thought. You can improve your agility at multitasking, but you will never be able to multitask and engage in deep thought at the same time.

There’s an app(liance) for that

Cecilia Kang, who writes a blog about technology policy for the Washington Post, reports today that FCC chairman Julius Genachowski has been reading my book The Big Switch. Genachowski finds (as I did) that the story of the buildout of the electric grid in the early decades of the last century can shed light on today’s buildout of a computing grid (or, as we’ve taken to saying, “cloud”).

Though, obviously, electric power and information processing are very different technologies, their shift from a local supply model to a network supply model has followed a similar pattern and will have similar types of consequences. As I argue in the book, the computing grid promises to power the information economy of the 21st century as the electric grid powered the industrial economy of the 20th century. The building of the electric grid was itself a dazzling engineering achievement. But what turned out to be far more important was what companies and individuals did with the cheap and readily available electricity after the grid was constructed. The same, I’m sure, will be true of the infrastructure of cloud computing.

As Genachowski said, “An ‘app for that’ could have been the motto for America in the 20th century, too, if Madison Avenue had predated electricity.” Back in the 1920s and 30s, “app” would have stood for “appliance” rather than “application,” but the idea is largely the same.

A commercially and socially important network has profound policy implications, not the least of which concerns access. At a conference last week, Genachowski said that “the great infrastructure challenge of our time is the deployment and adoption of robust broadband networks that deliver the promise of high-speed Internet to all Americans.” Although a network can be a means of diffusing power, it can also be a means of concentrating it.

Web Wide World

Toward the end of his strange and haunting 1940 story “Tlön, Uqbar, Orbis Tertius,” Jorge Luis Borges described the origins of a conspiracy to inscribe in the “real world” first a fictional country, named Uqbar, and then, more ambitiously, an entire fictional planet, called Tlön:

In March of 1941 a letter written by Gunnar Erfjord was discovered in a book by Hinton which had belonged to Herbert Ashe. The envelope bore a cancellation from Ouro Preto; the letter completely elucidated the mystery of Tlön. Its text corroborated the hypotheses of Martinez Estrada. One night in Lucerne or in London, in the early seventeenth century, the splendid history has its beginning. A secret and benevolent society (amongst whose members were Dalgarno and later George Berkeley) arose to invent a country. Its vague initial program included “hermetic studies,” philanthropy and the cabala. From this first period dates the curious book by Andrea. After a few years of secret conclaves and premature syntheses it was understood that one generation was not sufficient to give articulate form to a country. They resolved that each of the masters should elect a disciple who would continue his work. This hereditary arrangement prevailed; after an interval of two centuries the persecuted fraternity sprang up again in America. In 1824, in Memphis (Tennessee), one of its affiliates conferred with the ascetic millionaire Ezra Buckley. The latter, somewhat disdainfully, let him speak – and laughed at the plan’s modest scope. He told the agent that in America it was absurd to invent a country and proposed the invention of a planet. To this gigantic idea he added another, a product of his nihilism: that of keeping the enormous enterprise a secret. At that time the twenty volumes of the Encyclopaedia Britannica were circulating in the United States; Buckley suggested that a methodical encyclopedia of the imaginary planet be written. 
He was to leave them his mountains of gold, his navigable rivers, his pasture lands roamed by cattle and buffalo, his Negroes, his brothels and his dollars, on one condition: “The work will make no pact with the impostor Jesus Christ.” Buckley did not believe in God, but he wanted to demonstrate to this nonexistent God that mortal man was capable of conceiving a world. Buckley was poisoned in Baton Rouge in 1828; in 1914 the society delivered to its collaborators, some three hundred in number, the last volume of the First Encyclopedia of Tlön. The edition was a secret one; its forty volumes (the vastest undertaking ever carried out by man) would be the basis for another more detailed edition, written not in English but in one of the languages of Tlön. This revision of an illusory world, was called, provisionally, Orbis Tertius and one of its modest demiurgi was Herbert Ashe, whether as an agent of Gunnar Erfjord or as an affiliate, I do not know. His having received a copy of the Eleventh Volume would seem to favor the latter assumption. But what about the others?

In 1942 events became more intense. I recall one of the first of these with particular clarity and it seems that I perceived then something of its premonitory character. It happened in an apartment on Laprida Street, facing a high and light balcony which looked out toward the sunset. Princess Faucigny Lucinge had received her silverware from Poitiers. From the vast depths of a box embellished with foreign stamps, delicate immobile objects emerged: silver from Utrecht and Paris covered with hard heraldic fauna, and a samovar. Amongst them – with the perceptible and tenuous tremor of a sleeping bird – a compass vibrated mysteriously. The princess did not recognize it. Its blue needle longed for magnetic north; its metal case was concave in shape; the letters around its edge corresponded to one of the alphabets of Tlön. Such was the first intrusion of this fantastic world into the world of reality.

I am still troubled by the stroke of chance which made me a witness of the second intrusion as well. It happened some months later, at a country store owned by a Brazilian in Cuchilla Negra. Amorim and I were returning from Sant’ Anna. The River Tacuarembo had flooded and we were obliged to sample (and endure) the proprietor’s rudimentary hospitality. He provided us with some creaking cots in a large room cluttered with barrels and hides. We went to bed, but were kept from sleeping until dawn by the drunken ravings of an unseen neighbor, who intermingled inextricable insults with snatches of milongas – or rather with snatches of the same milonga. As might be supposed, we attributed this insistent uproar to the store owner’s fiery cane liquor. By daybreak, the man was dead in the hallway. The roughness of his voice had deceived us: he was only a youth. In his delirium a few coins had fallen from his belt, along with a cone of bright metal, the size of a die. In vain a boy tried to pick up this cone. A man was scarcely able to raise it from the ground. I held it in my hand for a few minutes; I remember that its weight was intolerable and that after it was removed, the feeling of oppressiveness remained. I also remember the exact circle it pressed into my palm. The sensation of a very small and at the same time extremely heavy object produced a disagreeable impression of repugnance and fear. One of the local men suggested we throw it into the swollen river; Amorim acquired it for a few pesos. No one knew anything about the dead man, except that “he came from the border.” These small, very heavy cones (made from a metal which is not of this world) are images of the divinity in certain regions of Tlön.

Here I bring the personal part of my narrative to a close. The rest is in the memory (if not in the hopes or fears) of all my readers. Let it suffice for me to recall or mention the following facts, with a mere brevity of words which the reflective recollection of all will enrich or amplify. Around 1944, a person doing research for the newspaper The American (of Nashville, Tennessee) brought to light in a Memphis library the forty volumes of the First Encyclopedia of Tlön. Even today there is a controversy over whether this discovery was accidental or whether it was permitted by the directors of the still nebulous Orbis Tertius. The latter is most likely. Some of the incredible aspects of the Eleventh Volume (for example, the multiplication of the hronir) have been eliminated or attenuated in the Memphis copies; it is reasonable to imagine that these omissions follow the plan of exhibiting a world which is not too incompatible with the real world. The dissemination of objects from Tlön over different countries would complement this plan… The fact is that the international press infinitely proclaimed the “find.” Manuals, anthologies, summaries, literal versions, authorized re-editions and pirated editions of the Greatest Work of Man flooded and still flood the earth. Almost immediately, reality yielded on more than one account. The truth is that it longed to yield. Ten years ago any symmetry with a semblance of order – dialectical materialism, anti-Semitism, Nazism – was sufficient to entrance the minds of men. How could one do other than submit to Tlön, to the minute and vast evidence of an orderly planet? It is useless to answer that reality is also orderly. Perhaps it is, but in accordance with divine laws – I translate: inhuman laws – which we never quite grasp. Tlön is surely a labyrinth, but it is a labyrinth devised by men, a labyrinth destined to be deciphered by men.

We are now coming to understand that the failure of the once much-hyped virtual world Second Life was inevitable. The promise was never that the Web would provide an alternative reality. It was that the Web, a labyrinth devised by men, would become reality. Reality, as Borges saw, longs to yield, to give way to a reduced but ordered simulation of itself. In the constraints imposed by software-mediated social and intellectual processes we find liberation, or at least relief. A meticulously manufactured Tlön can’t but displace an inhumanly arranged Earth.

The end of Borges’ story:

The contact and the habit of Tlön have disintegrated this world. Enchanted by its rigor, humanity forgets over and again that it is a rigor of chess masters, not of angels. … A scattered dynasty of solitary men has changed the face of the world. Their task continues. If our forecasts are not in error, a hundred years from now someone will discover the hundred volumes of the Second Encyclopedia of Tlön. Then English and French and mere Spanish will disappear from the globe. The world will be Tlön. I pay no attention to all this and go on revising, in the still days at the Adrogue hotel, an uncertain Quevedian translation (which I do not intend to publish) of Browne’s Urn Burial.

Throwing computers at health care

Computerworld reports on an extensive new Harvard Medical School study, appearing in the American Journal of Medicine, that paints a stark and troubling picture of the essential worthlessness of many of the computer systems that hospitals have invested in over the last few years. The researchers, led by Harvard’s David Himmelstein, begin their report by sketching out the hype that now surrounds health care automation:

Enthusiasm for health information technology spans the political spectrum, from Barack Obama to Newt Gingrich. Congress is pouring $19 billion into it. Health reformers of many stripes see computerization as a painless solution to the most vexing health policy problems, allowing simultaneous quality improvement and cost reduction …

In 2005, one team of analysts projected annual savings of $77.8 billion, whereas another foresaw more than $81 billion in savings plus substantial health gains from the nationwide adoption of optimal computerization. Today, the federal government’s health information technology website states (without reference) that “Broad use of health IT will: improve health care quality; prevent medical errors; reduce health care costs; increase administrative efficiencies; decrease paperwork; and expand access to affordable care.”

As was true of business computing systems in general, at least until the early years of this decade, it’s been taken on faith that big IT investments will translate into performance gains: If you buy IT, the rewards will come. Never mind that, as the researchers note, no actual studies “have examined the cost and quality impacts of computerization at a diverse national sample of hospitals.”

Now, at last, we have such a study. The researchers combed through data on IT spending, administrative costs, and quality of care at 4,000 US hospitals for the years 2003 through 2007. Their analysis found no correlation between IT investment and cost savings or efficiency at hospitals, and in fact found some evidence of a link between aggressive IT spending and higher administrative costs. There appeared to be a slight correlation between IT spending and care quality, in some areas, though even here the link was tenuous:

We found no evidence that computerization has lowered costs or streamlined administration. Although bivariate analyses found higher costs at more computerized hospitals, multivariate analyses found no association. For administrative costs, neither bivariate nor multivariate analyses showed a consistent relationship to computerization. Although computerized physician order entry was associated with lower administrative costs in some years on bivariate analysis, no such association remained after adjustment for confounders. Moreover, hospitals that increased their computerization more rapidly had larger increases in administrative costs. More encouragingly, greater use of information technology was associated with a consistent though small increase in quality scores.

We used a variety of analytic strategies to search for evidence that computerization might be cost-saving. In cross-sectional analyses, we examined whether more computerized hospitals had lower costs or more efficient administration in any of the 5 years. We also looked for lagged effects, that is, whether cost-savings might emerge after the implementation of computerized systems. We looked for subgroups of computer applications, as well as individual applications, that might result in savings. None of these hypotheses were borne out. Even the select group of hospitals at the cutting edge of computerization showed neither cost nor efficiency advantages. Our longitudinal analysis suggests that computerization may actually increase administrative costs, at least in the near term.

The modest quality advantages associated with computerization are difficult to interpret. The quality scores reflect processes of care rather than outcomes; more information technology may merely improve scores without actually improving care, for example, by facilitating documentation of allowable exceptions …

[A]s currently implemented, health information technology has a modest impact on process measures of quality, but no impact on administrative efficiency or overall costs. Predictions of cost-savings and efficiency improvements from the widespread adoption of computers are premature at best.
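The researchers’ distinction between bivariate and multivariate results turns on confounding: a variable such as hospital size can drive both IT spending and administrative costs, producing a raw correlation that vanishes once you adjust for it. A hedged sketch with synthetic data (all numbers invented for illustration, not drawn from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hospital "size" is a confounder: in this toy model it drives both
# IT spending and administrative cost, while IT itself has no
# direct effect on cost at all.
size = rng.normal(size=n)
it_spend = size + rng.normal(scale=0.5, size=n)
admin_cost = size + rng.normal(scale=0.5, size=n)

# Bivariate: the raw correlation makes it look as if IT raises costs.
raw = np.corrcoef(it_spend, admin_cost)[0, 1]

def residual(y, x):
    """Remove the linear effect of x from y (simple regression)."""
    slope = np.cov(y, x)[0, 1] / np.var(x)
    return y - slope * x

# "Multivariate": correlate the residuals after regressing both
# variables on size -- a partial correlation controlling for size.
adjusted = np.corrcoef(residual(it_spend, size),
                       residual(admin_cost, size))[0, 1]

# raw is strongly positive; adjusted is near zero.
print(raw, adjusted)
```

The point of the sketch is simply that “bivariate analyses found higher costs at more computerized hospitals” and “multivariate analyses found no association” are not contradictory findings; the second is the one that speaks to causation.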

There is a widespread faith, beginning at the very top of our government, that pouring money into computerization will lead to big improvements in both the cost and quality of health care. As this study shows, those assumptions need to be questioned – or a whole lot of taxpayer money may go to waste. Information technology has great promise for health care, but simply dumping cash into traditional commercial systems and applications is unlikely to achieve that promise – and may backfire by increasing costs further.