Google and the ethics of the cloud

The New Republic has published my comment on Google’s about-face on China. I reprint it here:

Google is being widely hailed for its announcement yesterday that it will stop censoring its search results in China, even if it means having to abandon that vast market. After years of compromising its own ideals on the free flow of information, the company is at last, it seems, putting its principles ahead of its business interests.

But Google’s motivations are not as pure as they may appear. While there’s almost certainly an ethical component to the company’s decision – Google and its founders have agonized in a very public way over their complicity in Chinese censorship – yesterday’s decision seems to have been spurred more by hard business calculations than soft moral ones. If Google had not, as it revealed in its announcement, “detected a highly sophisticated and targeted attack on our corporate infrastructure originating from China,” there’s no reason to believe it would have altered its policy of censoring search results to fit the wishes of the Chinese authorities. It was the attack, not a sudden burst of righteousness, that spurred Google’s action.

Google’s overriding business goal is to encourage us to devote more of our time and entrust more of our personal information to the Internet, particularly to the online computing cloud that is displacing the PC hard drive as the center of personal computing. The more that we use the Net, the more Google learns about us, the more frequently it shows us its ads, and the more money it makes. In order to continue to expand the time people spend online, Google and other Internet companies have to make the Net feel like a safe, well-protected space. If our trust in the Web is undermined in any way, we’ll retreat from the network and seek out different ways to communicate, compute, and otherwise store and process data. The consequences for Google’s business would be devastating.

Just as the early operators of passenger trains and airlines had, above all else, to convince the public that their services were safe, so Google has to convince the public that the Net is safe. Over the last few years, the company has assumed the role of the Web’s policeman. It encourages people to install anti-virus software on their PCs and take other measures to protect themselves from online crime. It identifies and isolates sites that spread malware. It plays a lead role in coordinating government and industry efforts to enhance network security and monitor and fight cyber attacks.

In this context, the “highly sophisticated” assault that Google says originated from China—it stopped short of blaming the Chinese government, though it said that the effort appeared to be aimed at discovering information about dissidents—threatens the very heart of the company’s business. Google admitted that certain of its customers’ Gmail accounts were compromised, a breach that, if expanded or repeated, would very quickly make all of us think twice before sharing personal information over the Web.

However important the Chinese market may be to Google, in either the short or the long term, it is less important than maintaining the integrity of the Net as a popular medium for information exchange. Like many other Western companies, Google has shown that it is willing to compromise its ideals in order to reach Chinese consumers. What it’s not willing to compromise is the security of the cloud, on which its entire business rests.

It is what you know

“It’s not what you know,” writes Google’s Marissa Mayer, “it’s what you can find out.” That’s as succinct a statement of Google’s intellectual ethic as I’ve come across. Forget “I think, therefore I am.” It’s now “I search, therefore I am.” It’s better to have access to knowledge than to have knowledge. “The Internet empowers,” writes Mayer, with a clumsiness of expression that bespeaks formulaic thought, “better decision-making and a more efficient use of time.”

The late Richard Poirier subtitled his dazzling critical exploration of Robert Frost’s poetry “the work of knowing.” At his best, wrote Poirier, Frost sought “to promote in writing and in reading an inquisitiveness about what cannot quite be signified. He leads us toward a kind of knowing that belongs to dream and reverie on the far side of the labor of mind or of body.” For Google “what cannot quite be signified” does not exist. In place of inquisitiveness we have acquisitiveness: information as commodity, thought as transaction.

“The Internet,” writes Mayer, “can facilitate an incredible persistence and availability of information, but given the Internet’s adolescence, all of the information simply isn’t there yet. I find that in some ways my mind has evolved to this new way of thinking, relying on the information’s existence and availability, so much so that it’s almost impossible to conclude that the information isn’t findable because it just isn’t online.” When Mayer says her “mind has evolved” to the point that it can only recognize and process information that has been digitized and uploaded, she is confessing to undergoing an intellectual dehumanization. She is confessing to being computerized.

Poirier:

[Frost] insists on our acknowledging in each and every poem, however slight, that poetry is a “made” thing. So, too, is truth. Thus, the quality which allows the poetry to seem familiar and recognizable as such, that makes it “beautiful,” is derivative of a larger conviction he shares with the William James of Pragmatism. “Truth,” James insisted, “is not a stagnant property … Truth is made, just as health, wealth and strength are made, in the course of experience.”

It’s not what you can find out, Frost and James and Poirier told us; it’s what you know. Truth is self-created through labor, through the hard, inefficient, unscripted work of the mind, through the indirection of dream and reverie. What matters is what cannot be rendered as code. Google can give you everything but meaning.

Mr. Tracy’s library

Edge’s annual question for 2010 is “How is the Internet changing the way you think?” Some 170 folks submitted answers, including me. (I found it a bit of a challenge, since I wanted to avoid pre-plagiarizing my upcoming book, which happens to be on this subject.) Here’s my submission:

As the school year began last September, Cushing Academy, an elite Massachusetts prep school that’s been around since Civil War days, announced that it was emptying its library of books. In place of the thousands of volumes that had once crowded the building’s shelves, the school was installing, it said, “state-of-the-art computers with high-definition screens for research and reading” as well as “monitors that provide students with real-time interactive data and news feeds from around the world.” Cushing’s bookless library would become, boasted headmaster James Tracy, “a model for the 21st-century school.”

The story gained little traction in the press—it came and went as quickly as a tweet—but to me it felt like a cultural milestone. A library without books would have seemed unthinkable just twenty years ago. Today, the news almost seems overdue. I’ve made scores of visits to libraries over the last couple of years. Every time, I’ve seen more people peering into computer screens than thumbing through pages. The primary role played by libraries today seems to have already shifted from providing access to printed works to providing access to the Internet. There’s every reason to believe that trend will only accelerate.

“When I look at books, I see an outdated technology,” Mr. Tracy told a reporter from the Boston Globe. His charges would seem to agree. A 16-year-old student at the school took the disappearance of the library books in stride. “When you hear the word ‘library,’ you think of books,” she said. “But very few students actually read them.”

What makes it easy for an educational institution like Cushing to jettison its books is the assumption that the words in books are the same whether they’re printed on paper or formed of pixels or E Ink on a screen. A word is a word is a word. “If I look outside my window and I see my student reading Chaucer under a tree,” said Mr. Tracy, giving voice to this common view, “it is utterly immaterial to me whether they’re doing so by way of a Kindle or by way of a paperback.” The medium, in other words, doesn’t matter.

But Mr. Tracy is wrong. The medium does matter. It matters greatly. The experience of reading words on a networked computer, whether it’s a PC, an iPhone, or a Kindle, is very different from the experience of reading those same words in a book. As a technology, a book focuses our attention, isolates us from the myriad distractions that fill our everyday lives. A networked computer does precisely the opposite. It’s designed to scatter our attention. It doesn’t shield us from environmental distractions; it adds to them. The words on a computer screen exist in a welter of contending stimuli.

The human brain, science tells us, adapts readily to its environment. The adaptation occurs at a deep biological level, in the way our nerve cells, or neurons, connect. The technologies we think with, including the media we use to gather, store, and share information, are critical elements of our intellectual environment and they play important roles in shaping our modes of thought. That fact has not only been proven in the laboratory; it’s evident from even a cursory glance at the course of intellectual history. It may be immaterial to Mr. Tracy whether a student reads from a book or a screen, but it is not immaterial to that student’s mind.

My own reading and thinking habits have shifted dramatically since I first logged onto the Web fifteen or so years ago. I now do the bulk of my reading and researching online. And my brain has changed as a result. Even as I’ve become more adept at navigating the rapids of the Net, I have experienced a steady decay in my ability to sustain my attention. As I explained in an Atlantic Monthly essay in 2008, “what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles.” Knowing that the depth of our thought is tied directly to the intensity of our attentiveness, it’s hard not to conclude that as we adapt to the intellectual environment of the Net our thinking becomes shallower.

There are as many human brains as there are human beings. I expect, therefore, that reactions to the Net’s influence, and hence to this year’s Edge question, will span many points of view. Some people will find in the busy interactivity of the networked screen an intellectual environment ideally suited to their mental proclivities. Others will see a catastrophic erosion in the ability of human beings to engage in calmer, more meditative modes of thought. A great many will likely be somewhere between the extremes, thankful for the Net’s riches but worried about its long-term effects on the depth of individual intellect and collective culture.

My own experience leads me to believe that what we stand to lose will be at least as great as what we stand to gain. I feel sorry for the kids at Cushing Academy.

AWS: the new Chicago Edison

The key to running a successful large-scale utility is to match capacity (i.e., capital) to demand, and the key to matching capacity to demand is to manipulate demand through pricing. The worst thing for a utility, particularly in the early stages of its growth, is to have unused capacity. At the end of the nineteenth century, Samuel Insull, president of the then-tiny Chicago Edison, started the electric utility revolution when he had the counterintuitive realization that to make more money his company had to cut its prices drastically, at least for those customers whose patterns of electricity use would help the utility maximize its capacity utilization.
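A bit of invented arithmetic makes Insull's logic concrete. The figures below are purely hypothetical, not drawn from Chicago Edison's books; the point is only that a plant's capital cost is fixed, so every additional kilowatt-hour sold spreads that cost thinner.

```python
# Hypothetical illustration of Insull's capacity-utilization logic.
# All figures are invented; only the arithmetic matters.

ANNUAL_CAPITAL_COST = 100_000.0   # dollars per year to own the plant (assumed)
CAPACITY_KW = 1_000               # plant size in kilowatts (assumed)
HOURS_PER_YEAR = 8_760

for load_factor in (0.10, 0.30, 0.60):   # share of capacity actually used
    kwh_sold = CAPACITY_KW * HOURS_PER_YEAR * load_factor
    capital_cost_per_kwh = ANNUAL_CAPITAL_COST / kwh_sold
    print(f"load factor {load_factor:.0%}: "
          f"capital cost {capital_cost_per_kwh * 100:.2f} cents per kWh")
```

Tripling utilization cuts the capital cost per kilowatt-hour by two-thirds, which is why price cuts that filled otherwise idle capacity made Insull more money, not less.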

Amazon Web Services is emerging as the Chicago Edison of utility computing. Perhaps because its background in retailing gives it a different perspective from that of traditional IT vendors, it has left those vendors in the dust when it comes to pioneering the new network-based model of supplying computing and storage capacity. Late yesterday, the company continued its innovations on the pricing front, announcing a new pricing model aimed at selling spare computing capacity, through its EC2 service, on a moment-by-moment basis. Buyers can bid for unused compute cycles in what is essentially a spot market for virtual computers. When their bid is higher than the spot price in the market, their virtual machines start running (at the spot price). When their bid falls below the spot price, their machines stop running, and the capacity is reallocated to those customers with higher bids.
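To make the mechanics concrete, here is a rough sketch of how such a spot market might clear. Amazon has not disclosed how it actually sets the spot price, so this models it simply as a uniform-price auction in which the highest bidders get the spare machines and the lowest accepted bid sets the hourly price that every running customer pays; the customer names, prices, and capacity figures are invented.

```python
# Illustrative sketch only: Amazon has not published its spot-pricing
# algorithm, so this models the market as a simple uniform-price auction.
# All customer names, prices, and capacity figures are hypothetical.

from dataclasses import dataclass

@dataclass
class Bid:
    customer: str
    max_price: float   # the most the customer will pay per instance-hour
    instances: int     # how many virtual machines the customer wants

def clear_spot_market(spare_capacity: int, bids: list[Bid]) -> tuple[float, dict[str, int]]:
    """Allocate spare instances to the highest bidders.

    Returns the clearing (spot) price and the number of instances each
    customer gets. Everyone who runs pays the spot price, not their own
    bid, so a high bid mainly insures against interruption rather than
    raising the actual hourly cost.
    """
    allocations: dict[str, int] = {}
    spot_price = 0.0
    remaining = spare_capacity
    for bid in sorted(bids, key=lambda b: b.max_price, reverse=True):
        if remaining <= 0:
            break
        granted = min(bid.instances, remaining)
        allocations[bid.customer] = granted
        remaining -= granted
        spot_price = bid.max_price  # the marginal accepted bid sets the price
    return spot_price, allocations

if __name__ == "__main__":
    bids = [
        Bid("batch-analytics", max_price=0.05, instances=40),
        Bid("data-mining", max_price=0.12, instances=30),
        Bid("rendering-farm", max_price=0.08, instances=50),
    ]
    price, allocations = clear_spot_market(spare_capacity=80, bids=bids)
    print(f"spot price: ${price:.2f} per instance-hour")
    print(allocations)  # data-mining and rendering-farm run; batch-analytics waits
```

In a model like this, a workload with no deadline simply waits until enough capacity frees up for its low bid to clear, which is exactly the kind of demand Amazon wants to attract to its idle machines.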

Amazon’s spot market promises to significantly reduce the cost of computing tasks that don’t have immediate deadlines, such as large data-mining or other analytical efforts. And it promises to further increase Amazon’s capacity utilization, which will in turn allow Amazon to continue to reduce its prices, attract more customers, further smooth demand, and avoid wasted capital. As Insull discovered, cutting prices to optimize capacity utilization sets a virtuous cycle in motion.

In describing the new “spot instances” plan, AWS chief Werner Vogels used words that could have come out of Insull’s mouth a century ago:

Spot Instances are an innovation that is made possible by the unparalleled economies of scale created by the tremendous growth of the AWS Infrastructure Services. The broad Amazon EC2 customer base brings such diversity in workload and utilization patterns that it allows us to operate Amazon EC2 with extreme efficiency. True to the Amazon philosophy, we let our customers benefit from the economies of scale they help us create by lowering our prices when we achieve lower cost structures. Consistently we have lowered compute, storage and bandwidth prices based on such cost savings.

At Chicago Edison, Insull had nothing to lose. He had recently quit his executive position at Thomas Edison’s General Electric, the dominant player in on-premises electricity generation. No longer subject to the constraints of the old business model, which he had played a crucial role in establishing, he had the freedom to destroy that model. Amazon Web Services is also an outsider in the IT business, unbeholden to the constraints of the established and very lucrative business model, and that is the company’s great advantage.

UPDATE: Jonathan Boutelle, a founder of Slideshare, already has a strategy for gaming AWS’s spot market: bid high, buy low. That should be music to Amazon’s ears. If enough buyers pursue it, the spot price will quickly approach the set price.

Hypermultitasking

The Britannica Blog has been running a forum on multitasking this week, including posts from Maggie Jackson, Howard Rheingold, and Heather Gold. My own small contribution to the discussion appears today and is reprinted below:

Thank God for multitasking. Can you imagine how dull life would be if we humans lacked the ability to rapidly and seamlessly shift our focus from one task or topic to another? We wouldn’t be able to listen to the radio while driving, have conversations while cooking, juggle assignments at work, or even chew gum while walking. The world would grind to a depressing halt.

The ability to multitask is one of the essential strengths of our infinitely amazing brains. We wouldn’t want to lose it. But as neurobiologists and psychologists have shown, and as Maggie Jackson has carefully documented, we pay a price when we multitask. Because the depth of our attention governs the depth of our thought and our memory, when we multitask we sacrifice understanding and learning. We do more but know less. And the more tasks we juggle and the more quickly we switch between them, the higher the cognitive price we pay.

The problem today is not that we multitask. We’ve always multitasked. The problem is that we never stop multitasking. The natural busyness of our lives is being amplified by the networked gadgets that constantly send us messages and alerts, bombard us with other bits of important and trivial information, and generally interrupt the train of our thought. The data barrage never lets up. As a result, we devote ever less time to the calmer, more attentive modes of thinking that have always given richness to our intellectual lives and our culture—the modes of thinking that involve concentration, contemplation, reflection, introspection. The less we practice these habits of mind, the more we risk losing them altogether.

There’s evidence that, as Howard Rheingold suggests, we can train ourselves to be better multitaskers, to shift our attention even more swiftly and fluidly among contending chores and stimuli. And that will surely help us navigate the fast-moving stream of modern life. But improving our ability to multitask, neuroscience tells us in no uncertain terms, will never return to us the depth of understanding that comes with attentive, single-minded thought. You can improve your agility at multitasking, but you will never be able to multitask and engage in deep thought at the same time.

There’s an app(liance) for that

Cecilia Kang, who writes a blog about technology policy for the Washington Post, reports today that FCC chairman Julius Genachowski has been reading my book The Big Switch. Genachowski finds (as I did) that the story of the buildout of the electric grid in the early decades of the last century can shed light on today’s buildout of a computing grid (or, as we’ve taken to saying, “cloud”).

Though, obviously, electric power and information processing are very different technologies, their shift from a local supply model to a network supply model has followed a similar pattern and will have similar types of consequences. As I argue in the book, the computing grid promises to power the information economy of the 21st century as the electric grid powered the industrial economy of the 20th century. The building of the electric grid was itself a dazzling engineering achievement. But what turned out to be far more important was what companies and individuals did with the cheap and readily available electricity after the grid was constructed. The same, I’m sure, will be true of the infrastructure of cloud computing.

As Genachowski said, “An ‘app for that’ could have been the motto for America in the 20th century, too, if Madison Avenue had predated electricity.” Back in the 1920s and 30s, “app” would have stood for “appliance” rather than “application,” but the idea is largely the same.

A commercially and socially important network has profound policy implications, not the least of which concerns access. At a conference last week, Genachowski said that “the great infrastructure challenge of our time is the deployment and adoption of robust broadband networks that deliver the promise of high-speed Internet to all Americans.” Although a network can be a means of diffusing power, it can also be a means of concentrating it.

Web Wide World

Toward the end of his strange and haunting 1940 story “Tlön, Uqbar, Orbis Tertius,” Jorge Luis Borges described the origins of a conspiracy to inscribe in the “real world” first a fictional country, named Uqbar, and then, more ambitiously, an entire fictional planet, called Tlön:

In March of 1941 a letter written by Gunnar Erfjord was discovered in a book by Hinton which had belonged to Herbert Ashe. The envelope bore a cancellation from Ouro Preto; the letter completely elucidated the mystery of Tlön. Its text corroborated the hypotheses of Martinez Estrada. One night in Lucerne or in London, in the early seventeenth century, the splendid history has its beginning. A secret and benevolent society (amongst whose members were Dalgarno and later George Berkeley) arose to invent a country. Its vague initial program included “hermetic studies,” philanthropy and the cabala. From this first period dates the curious book by Andrea. After a few years of secret conclaves and premature syntheses it was understood that one generation was not sufficient to give articulate form to a country. They resolved that each of the masters should elect a disciple who would continue his work. This hereditary arrangement prevailed; after an interval of two centuries the persecuted fraternity sprang up again in America. In 1824, in Memphis (Tennessee), one of its affiliates conferred with the ascetic millionaire Ezra Buckley. The latter, somewhat disdainfully, let him speak – and laughed at the plan’s modest scope. He told the agent that in America it was absurd to invent a country and proposed the invention of a planet. To this gigantic idea he added another, a product of his nihilism: that of keeping the enormous enterprise a secret. At that time the twenty volumes of the Encyclopaedia Britannica were circulating in the United States; Buckley suggested that a methodical encyclopedia of the imaginary planet be written. He was to leave them his mountains of gold, his navigable rivers, his pasture lands roamed by cattle and buffalo, his Negroes, his brothels and his dollars, on one condition: “The work will make no pact with the impostor Jesus Christ.” Buckley did not believe in God, but he wanted to demonstrate to this nonexistent God that mortal man was capable of conceiving a world. Buckley was poisoned in Baton Rouge in 1828; in 1914 the society delivered to its collaborators, some three hundred in number, the last volume of the First Encyclopedia of Tlön. The edition was a secret one; its forty volumes (the vastest undertaking ever carried out by man) would be the basis for another more detailed edition, written not in English but in one of the languages of Tlön. This revision of an illusory world, was called, provisionally, Orbis Tertius and one of its modest demiurgi was Herbert Ashe, whether as an agent of Gunnar Erfjord or as an affiliate, I do not know. His having received a copy of the Eleventh Volume would seem to favor the latter assumption. But what about the others?

In 1942 events became more intense. I recall one of the first of these with particular clarity and it seems that I perceived then something of its premonitory character. It happened in an apartment on Laprida Street, facing a high and light balcony which looked out toward the sunset. Princess Faucigny Lucinge had received her silverware from Poitiers. From the vast depths of a box embellished with foreign stamps, delicate immobile objects emerged: silver from Utrecht and Paris covered with hard heraldic fauna, and a samovar. Amongst them – with the perceptible and tenuous tremor of a sleeping bird – a compass vibrated mysteriously. The princess did not recognize it. Its blue needle longed for magnetic north; its metal case was concave in shape; the letters around its edge corresponded to one of the alphabets of Tlön. Such was the first intrusion of this fantastic world into the world of reality.

I am still troubled by the stroke of chance which made me a witness of the second intrusion as well. It happened some months later, at a country store owned by a Brazilian in Cuchilla Negra. Amorim and I were returning from Sant’ Anna. The River Tacuarembo had flooded and we were obliged to sample (and endure) the proprietor’s rudimentary hospitality. He provided us with some creaking cots in a large room cluttered with barrels and hides. We went to bed, but were kept from sleeping until dawn by the drunken ravings of an unseen neighbor, who intermingled inextricable insults with snatches of milongas – or rather with snatches of the same milonga. As might be supposed, we attributed this insistent uproar to the store owner’s fiery cane liquor. By daybreak, the man was dead in the hallway. The roughness of his voice had deceived us: he was only a youth. In his delirium a few coins had fallen from his belt, along with a cone of bright metal, the size of a die. In vain a boy tried to pick up this cone. A man was scarcely able to raise it from the ground. I held it in my hand for a few minutes; I remember that its weight was intolerable and that after it was removed, the feeling of oppressiveness remained. I also remember the exact circle it pressed into my palm. The sensation of a very small and at the same time extremely heavy object produced a disagreeable impression of repugnance and fear. One of the local men suggested we throw it into the swollen river; Amorim acquired it for a few pesos. No one knew anything about the dead man, except that “he came from the border.” These small, very heavy cones (made from a metal which is not of this world) are images of the divinity in certain regions of Tlön.

Here I bring the personal part of my narrative to a close. The rest is in the memory (if not in the hopes or fears) of all my readers. Let it suffice for me to recall or mention the following facts, with a mere brevity of words which the reflective recollection of all will enrich or amplify. Around 1944, a person doing research for the newspaper The American (of Nashville, Tennessee) brought to light in a Memphis library the forty volumes of the First Encyclopedia of Tlön. Even today there is a controversy over whether this discovery was accidental or whether it was permitted by the directors of the still nebulous Orbis Tertius. The latter is most likely. Some of the incredible aspects of the Eleventh Volume (for example, the multiplication of the hronir) have been eliminated or attenuated in the Memphis copies; it is reasonable to imagine that these omissions follow the plan of exhibiting a world which is not too incompatible with the real world. The dissemination of objects from Tlön over different countries would complement this plan… The fact is that the international press infinitely proclaimed the “find.” Manuals, anthologies, summaries, literal versions, authorized re-editions and pirated editions of the Greatest Work of Man flooded and still flood the earth. Almost immediately, reality yielded on more than one account. The truth is that it longed to yield. Ten years ago any symmetry with a semblance of order – dialectical materialism, anti-Semitism, Nazism – was sufficient to entrance the minds of men. How could one do other than submit to Tlön, to the minute and vast evidence of an orderly planet? It is useless to answer that reality is also orderly. Perhaps it is, but in accordance with divine laws – I translate: inhuman laws – which we never quite grasp. Tlön is surely a labyrinth, but it is a labyrinth devised by men, a labyrinth destined to be deciphered by men.

We are now coming to understand that the failure of the once much-hyped virtual world Second Life was inevitable. The point was never that the Web would provide an alternative reality; it was that the Web, a labyrinth devised by men, would become reality. Reality, as Borges saw, longs to yield, to give way to a reduced but ordered simulation of itself. In the constraints imposed by software-mediated social and intellectual processes we find liberation, or at least relief. A meticulously manufactured Tlön can’t but displace an inhumanly arranged Earth.

The end of Borges’ story:

The contact and the habit of Tlön have disintegrated this world. Enchanted by its rigor, humanity forgets over and again that it is a rigor of chess masters, not of angels. … A scattered dynasty of solitary men has changed the face of the world. Their task continues. If our forecasts are not in error, a hundred years from now someone will discover the hundred volumes of the Second Encyclopedia of Tlön. Then English and French and mere Spanish will disappear from the globe. The world will be Tlön. I pay no attention to all this and go on revising, in the still days at the Adrogue hotel, an uncertain Quevedian translation (which I do not intend to publish) of Browne’s Urn Burial.