Category Archives: Uncategorized

Information wants to be free my ass

Never before in history have people paid as much for information as they do today.

I’m guessing that by the time you reached the end of that sentence, you found yourself ROFLMAO. I mean, WTF, this is the Era of Abundance, isn’t it? The Age of Free. Digital manna rains from the heavens.

Sorry, sucker. The joke’s on you.

Do the math. Sit down right now, and add up what you pay every month for:

- Internet service
- Cable TV service
- Cellular telephone service (voice, data, messaging)
- Landline telephone service
- Satellite radio
- Netflix
- Wi-Fi hotspots
- TiVo
- Other information services

So what’s the total? $100? $200? $300? $400? Gizmodo reports that monthly information subscriptions and fees can easily run to $500 or more nowadays. A lot of people today probably spend more on information than they spend on food.
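If you'd rather not do the sum in your head, here is a trivial sketch of the tally. Every dollar figure below is an invented placeholder, not a quote of anyone's actual bill; plug in your own numbers.

```python
# Back-of-the-envelope tally of monthly information bills.
# Every dollar figure here is an invented placeholder; substitute your own.
monthly_bills = {
    "Internet service": 50,
    "Cable TV service": 80,
    "Cellular service (voice, data, messaging)": 90,
    "Landline telephone service": 25,
    "Satellite radio": 13,
    "Netflix": 9,
    "Wi-Fi hotspots": 10,
    "TiVo": 13,
}

monthly_total = sum(monthly_bills.values())
print(f"Monthly total: ${monthly_total}")          # $290 with these placeholders
print(f"Yearly total:  ${monthly_total * 12:,}")   # $3,480 per year
```

Even with modest placeholder numbers, the annual figure lands in the thousands of dollars.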

The reason we fork out all that dough is (I’m going to whisper the rest of this sentence) because we place a high monetary value on the content we receive as a result of those subscriptions and fees.

Now somebody remind me how we all came to think that information wants to be free.

It’s a strange world we live in. We begrudge the folks who actually create the stuff we enjoy reading, listening to, and watching a few pennies for their labor, and yet at the very same time we casually throw hundreds of hard-earned bucks at the saps who run the stupid networks through which the stuff is delivered. We screw the struggling artist, and pay the suit.

Somebody’s got a good thing going.

UPDATE: Alan Jacobs, over at Text Patterns, adds an interesting gloss to this post:

One of Nick’s commenters suggests that his point is misleading because we’re not paying all that much per bit of data. That’s probably true, but it may not make the point the commenter wants it to make. Consider an analogy to restaurant dining: Americans in the past twenty years have spent far, far more on eating out than any of their ancestors did, and that’s a significant development even if you point out that huge portions of fat-laden food mean that they’re not paying all that much per calorie. In fact, that analogy may work on more than one level: are we unhealthily addicted to information (of any kind, and regardless of quality) in the same way that we’re addicted to fatty foods?

Back when we were more conscious of what particular bits of information we were spending our information dollars on, was our information diet (so to speak) healthier than it is today, when we buy tickets to all-you-can-eat buffets? I’m not sure I know the answer to that question, but it’s a question worth pondering. It certainly underscores how silly it is to simply try to measure “cost per unit of information” as if that alone tells us anything. Unless, of course, you believe that every unit of information is identical in terms of quality and value.

Other people’s privacy

In the wake of Google’s revelation last week of a concerted, sophisticated cyber attack on many corporate networks, including its own Gmail service, Eric Schmidt’s recent comments about privacy become even more troubling. As you’ll recall, in a December 3 CNBC interview, Schmidt said, “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place. But if you really need that kind of privacy, the reality is that search engines – including Google – do retain this information for some time and it’s important, for example, that we are all subject in the United States to the Patriot Act and it is possible that all that information could be made available to the authorities.”

For a public figure to say “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place” is, at the most practical of levels, incredibly rash. You’re essentially extending an open invitation to reporters to publish anything about your life that they can uncover. (Ask Gary Hart.) The statement also paints Schmidt as a hypocrite. In 2005, he threw a legendary hissy fit when CNET’s Elinor Mills, in an article about privacy, published some details about his residence, his finances, and his politics that she had uncovered through Google searches. Google infamously cut off all contact with CNET for a couple of months. Schmidt didn’t seem so casual about the value of privacy when his own was at stake.

The China-based cyber attack, which apparently came to Google’s attention just a few days after the CNBC interview, makes Schmidt’s remarks about privacy and deferring to “the authorities” seem not just foolhardy but reprehensible. When the news reached Schmidt that some Gmail accounts had been compromised, perhaps endangering Chinese dissidents, did he shrug his shoulders and say, “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place”? Did he say that Gmail customers need to understand that sometimes “the authorities” will have access to their messages? Judging by Google’s reaction to the attack, it takes the privacy of its own networks extremely seriously – as well it should. The next time Schmidt is asked about privacy, he should remember that.

Of course, Schmidt isn’t the first Silicon Valley CEO to make cavalier comments about privacy. It began back in 1999 when Schmidt’s onetime boss, Sun Microsystems CEO Scott McNealy, proclaimed, “You have zero privacy anyway. Get over it.” Just this month, the idea was repeated by the web’s most amusing philosopher-king, Facebook CEO Mark Zuckerberg. In an on-stage interview, Zuckerberg defended his company’s recent decision to roll back privacy protections on its site by arguing that the desire for privacy was evaporating as a “social norm.” Facebook, said Zuckerberg, was merely responding to that putative shift.

Reading through these wealthy, powerful people’s glib statements on privacy, one begins to suspect that what they’re really talking about is other people’s privacy, not their own. If you exist within a personal Green Zone of private jets, fenced-off hideaways, and firewalls maintained by the country’s best law firms and PR agencies, it’s hardly a surprise that you’d eventually come to see privacy more as a privilege than a right. And if your company happens to make its money by mining personal data, well, that’s all the more reason to convince yourself that other people’s privacy may not be so important.

There’s a deeper danger here. The continuing denigration of privacy may begin to warp our understanding of what “privacy” really means. As Bruce Schneier has written, privacy is not just a screen we hide behind when we do something naughty or embarrassing; privacy is “intrinsic to the concept of liberty”:

For if we are observed in all matters, we are constantly under threat of correction, judgment, criticism, even plagiarism of our own uniqueness. We become children, fettered under watchful eyes, constantly fearful that – either now or in the uncertain future – patterns we leave behind will be brought back to implicate us, by whatever authority has now become focused upon our once-private and innocent acts. We lose our individuality, because everything we do is observable and recordable.

Privacy is not only essential to life and liberty; it’s essential to the pursuit of happiness, in the broadest and deepest sense of that phrase. It’s essential, as Schneier implies, to the development of individuality, of unique personality. We human beings are not just social creatures; we’re also private creatures. What we don’t share is as important as what we do share. The way that we choose to define the boundary between our public self and our private self will vary greatly from person to person, which is exactly why it’s so important to be ever vigilant in defending everyone’s ability and power to set that boundary as he or she sees fit. Today, online services and databases play increasingly important roles in our public and our private lives – and in the way we choose to distinguish between them. Many of those services and databases are under corporate control, operated for profit by companies like Google and Facebook. If those companies can’t be trusted to respect and defend the privacy rights of their users, they should be spurned.

Privacy is the skin of the self. Strip it away, and in no time desiccation sets in.

Google and the ethics of the cloud

The New Republic has published my comment on Google’s about-face on China. I reprint it here:

Google is being widely hailed for its announcement yesterday that it will stop censoring its search results in China, even if it means having to abandon that vast market. After years of compromising its own ideals on the free flow of information, the company is at last, it seems, putting its principles ahead of its business interests.

But Google’s motivations are not as pure as they may appear. While there’s almost certainly an ethical component to the company’s decision – Google and its founders have agonized in a very public way over their complicity in Chinese censorship – yesterday’s decision seems to have been spurred more by hard business calculations than soft moral ones. If Google had not, as it revealed in its announcement, “detected a highly sophisticated and targeted attack on our corporate infrastructure originating from China,” there’s no reason to believe it would have altered its policy of censoring search results to fit the wishes of the Chinese authorities. It was the attack, not a sudden burst of righteousness, that spurred Google’s action.

Google’s overriding business goal is to encourage us to devote more of our time and entrust more of our personal information to the Internet, particularly to the online computing cloud that is displacing the PC hard drive as the center of personal computing. The more that we use the Net, the more Google learns about us, the more frequently it shows us its ads, and the more money it makes. In order to continue to expand the time people spend online, Google and other Internet companies have to make the Net feel like a safe, well-protected space. If our trust in the Web is undermined in any way, we’ll retreat from the network and seek out different ways to communicate, compute, and otherwise store and process data. The consequences for Google’s business would be devastating.

Just as the early operators of passenger trains and airlines had, above all else, to convince the public that their services were safe, so Google has to convince the public that the Net is safe. Over the last few years, the company has assumed the role of the Web’s policeman. It encourages people to install anti-virus software on their PCs and take other measures to protect themselves from online crime. It identifies and isolates sites that spread malware. It plays a lead role in coordinating government and industry efforts to enhance network security and monitor and fight cyber attacks.

In this context, the “highly sophisticated” assault that Google says originated from China—it stopped short of blaming the Chinese government, though it said that the effort appeared to be aimed at discovering information about dissidents—threatens the very heart of the company’s business. Google admitted that certain of its customers’ Gmail accounts were compromised, a breach that, if expanded or repeated, would very quickly make all of us think twice before sharing personal information over the Web.

However important the Chinese market may be to Google, in either the short or the long term, it is less important than maintaining the integrity of the Net as a popular medium for information exchange. Like many other Western companies, Google has shown that it is willing to compromise its ideals in order to reach Chinese consumers. What it’s not willing to compromise is the security of the cloud, on which its entire business rests.

It is what you know

“It’s not what you know,” writes Google’s Marissa Mayer, “it’s what you can find out.” That’s as succinct a statement of Google’s intellectual ethic as I’ve come across. Forget “I think, therefore I am.” It’s now “I search, therefore I am.” It’s better to have access to knowledge than to have knowledge. “The Internet empowers,” writes Mayer, with a clumsiness of expression that bespeaks formulaic thought, “better decision-making and a more efficient use of time.”

The late Richard Poirier subtitled his dazzling critical exploration of Robert Frost’s poetry “the work of knowing.” At his best, wrote Poirier, Frost sought “to promote in writing and in reading an inquisitiveness about what cannot quite be signified. He leads us toward a kind of knowing that belongs to dream and reverie on the far side of the labor of mind or of body.” For Google “what cannot quite be signified” does not exist. In place of inquisitiveness we have acquisitiveness: information as commodity, thought as transaction.

“The Internet,” writes Mayer, “can facilitate an incredible persistence and availability of information, but given the Internet’s adolescence, all of the information simply isn’t there yet. I find that in some ways my mind has evolved to this new way of thinking, relying on the information’s existence and availability, so much so that it’s almost impossible to conclude that the information isn’t findable because it just isn’t online.” When Mayer says her “mind has evolved” to the point that it can only recognize and process information that has been digitized and uploaded, she is confessing to undergoing an intellectual dehumanization. She is confessing to being computerized.

Poirier:

[Frost] insists on our acknowledging in each and every poem, however slight, that poetry is a “made” thing. So, too, is truth. Thus, the quality which allows the poetry to seem familiar and recognizable as such, that makes it “beautiful,” is derivative of a larger conviction he shares with the William James of Pragmatism. “Truth,” James insisted, “is not a stagnant property … Truth is made, just as health, wealth and strength are made, in the course of experience.”

It’s not what you can find out, Frost and James and Poirier told us; it’s what you know. Truth is self-created through labor, through the hard, inefficient, unscripted work of the mind, through the indirection of dream and reverie. What matters is what cannot be rendered as code. Google can give you everything but meaning.

Mr. Tracy’s library

Edge’s annual question for 2010 is “How is the Internet changing the way you think?” Some 170 folks submitted answers, including me. (I found it a bit of a challenge, since I wanted to avoid pre-plagiarizing my upcoming book, which happens to be on this subject.) Here’s my submission:

As the school year began last September, Cushing Academy, an elite Massachusetts prep school that’s been around since Civil War days, announced that it was emptying its library of books. In place of the thousands of volumes that had once crowded the building’s shelves, the school was installing, it said, “state-of-the-art computers with high-definition screens for research and reading” as well as “monitors that provide students with real-time interactive data and news feeds from around the world.” Cushing’s bookless library would become, boasted headmaster James Tracy, “a model for the 21st-century school.”

The story gained little traction in the press—it came and went as quickly as a tweet—but to me it felt like a cultural milestone. A library without books would have seemed unthinkable just twenty years ago. Today, the news almost seems overdue. I’ve made scores of visits to libraries over the last couple of years. Every time, I’ve seen more people peering into computer screens than thumbing through pages. The primary role played by libraries today seems to have already shifted from providing access to printed works to providing access to the Internet. There’s every reason to believe that trend will only accelerate.

“When I look at books, I see an outdated technology,” Mr. Tracy told a reporter from the Boston Globe. His charges would seem to agree. A 16-year-old student at the school took the disappearance of the library books in stride. “When you hear the word ‘library,’ you think of books,” she said. “But very few students actually read them.”

What makes it easy for an educational institution like Cushing to jettison its books is the assumption that the words in books are the same whether they’re printed on paper or formed of pixels or E Ink on a screen. A word is a word is a word. “If I look outside my window and I see my student reading Chaucer under a tree,” said Mr. Tracy, giving voice to this common view, “it is utterly immaterial to me whether they’re doing so by way of a Kindle or by way of a paperback.” The medium, in other words, doesn’t matter.

But Mr. Tracy is wrong. The medium does matter. It matters greatly. The experience of reading words on a networked computer, whether it’s a PC, an iPhone, or a Kindle, is very different from the experience of reading those same words in a book. As a technology, a book focuses our attention, isolates us from the myriad distractions that fill our everyday lives. A networked computer does precisely the opposite. It’s designed to scatter our attention. It doesn’t shield us from environmental distractions; it adds to them. The words on a computer screen exist in a welter of contending stimuli.

The human brain, science tells us, adapts readily to its environment. The adaptation occurs at a deep biological level, in the way our nerve cells, or neurons, connect. The technologies we think with, including the media we use to gather, store, and share information, are critical elements of our intellectual environment and they play important roles in shaping our modes of thought. That fact has not only been proven in the laboratory; it’s evident from even a cursory glance at the course of intellectual history. It may be immaterial to Mr. Tracy whether a student reads from a book or a screen, but it is not immaterial to that student’s mind.

My own reading and thinking habits have shifted dramatically since I first logged onto the Web fifteen or so years ago. I now do the bulk of my reading and researching online. And my brain has changed as a result. Even as I’ve become more adept at navigating the rapids of the Net, I have experienced a steady decay in my ability to sustain my attention. As I explained in an Atlantic Monthly essay in 2008, “what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles.” Knowing that the depth of our thought is tied directly to the intensity of our attentiveness, it’s hard not to conclude that as we adapt to the intellectual environment of the Net our thinking becomes shallower.

There are as many human brains as there are human beings. I expect, therefore, that reactions to the Net’s influence, and hence to this year’s Edge question, will span many points of view. Some people will find in the busy interactivity of the networked screen an intellectual environment ideally suited to their mental proclivities. Others will see a catastrophic erosion in the ability of human beings to engage in calmer, more meditative modes of thought. A great many will likely be somewhere between the extremes, thankful for the Net’s riches but worried about its long-term effects on the depth of individual intellect and collective culture.

My own experience leads me to believe that what we stand to lose will be at least as great as what we stand to gain. I feel sorry for the kids at Cushing Academy.

AWS: the new Chicago Edison

The key to running a successful large-scale utility is to match capacity (i.e., capital) to demand, and the key to matching capacity to demand is to manipulate demand through pricing. The worst thing for a utility, particularly in the early stages of its growth, is to have unused capacity. At the end of the nineteenth century, Samuel Insull, president of the then-tiny Chicago Edison, started the electric utility revolution when he had the counterintuitive realization that to make more money his company had to cut its prices drastically, at least for those customers whose patterns of electricity use would help the utility maximize its capacity utilization.
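To see why unused capacity is so costly, consider a toy calculation. All of the numbers below are invented for illustration; the point is simply that a plant's capital cost is fixed, so the more of its capacity it actually sells, the cheaper each unit sold becomes.

```python
# Toy illustration of Insull's logic (all numbers invented): capital cost is
# fixed, so average cost per kWh sold falls as utilization rises.
fixed_cost_per_month = 100_000.0   # hypothetical capital + overhead, in dollars
capacity_kwh = 1_000_000           # hypothetical kWh the plant could deliver per month

def avg_cost(utilization):
    """Average fixed cost per kWh actually sold at a given utilization rate."""
    kwh_sold = capacity_kwh * utilization
    return fixed_cost_per_month / kwh_sold

# Peak-only customers might use, say, 30% of capacity; adding discounted
# off-peak demand pushes utilization toward 70%.
print(f"30% utilization: {avg_cost(0.30):.3f} $/kWh")   # 0.333 $/kWh
print(f"70% utilization: {avg_cost(0.70):.3f} $/kWh")   # 0.143 $/kWh
```

With the fixed cost spread over more than twice as many kilowatt-hours, there is room to discount off-peak prices and still come out ahead, which is the virtuous cycle Insull set in motion.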

Amazon Web Services is emerging as the Chicago Edison of utility computing. Perhaps because its background in retailing gives it a different perspective than that of traditional IT vendors, it has left those vendors in the dust when it comes to pioneering the new network-based model of supplying computing and storage capacity. Late yesterday, the company continued its innovations on the pricing front, announcing a new pricing model aimed at selling spare computing capacity, through its EC2 service, on a moment-by-moment basis. Buyers can bid for unused compute cycles in what is essentially a spot market for virtual computers. When their bid is higher than the spot price in the market, their virtual machines start running (at the spot price). When their bid falls below the spot price, their machines stop running, and the capacity is reallocated to those customers with higher bids.
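A rough sketch of that rule is below. The prices and bids are invented for illustration, not drawn from Amazon's actual spot-price history: an instance runs in any hour in which its bid meets or beats the spot price, and it is charged the spot price for that hour, never the bid itself.

```python
# Toy simulation of the spot-market rule described above. All prices are
# invented; they are not Amazon's actual spot-price history.

def simulate(bid, hourly_spot_prices):
    """Return (hours_run, total_cost) for one instance at a given bid."""
    hours_run, total_cost = 0, 0.0
    for spot in hourly_spot_prices:
        if bid >= spot:          # bid clears the market: the instance runs
            hours_run += 1
            total_cost += spot   # and is charged the spot price, not the bid
        # otherwise the instance is stopped and nothing is charged
    return hours_run, total_cost

# A hypothetical day of hourly spot prices, in dollars per instance-hour.
hourly_spot_prices = [0.03, 0.03, 0.04, 0.06, 0.10, 0.12, 0.08, 0.05,
                      0.04, 0.03, 0.03, 0.04, 0.05, 0.07, 0.09, 0.06,
                      0.05, 0.04, 0.03, 0.03, 0.04, 0.05, 0.06, 0.04]

for bid in (0.04, 0.08, 1.00):
    hours, cost = simulate(bid, hourly_spot_prices)
    if hours:
        print(f"bid ${bid:.2f}: ran {hours}h, paid ${cost:.2f} "
              f"(avg ${cost / hours:.3f}/hour)")
    else:
        print(f"bid ${bid:.2f}: never ran")
```

One consequence, which comes up in the update at the end of this post, is that a very high bid doesn't raise what you pay; since the charge is always the going spot price, a high bid only keeps the instance running through price spikes.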

Amazon’s spot market promises to significantly reduce the cost of computing tasks that don’t have immediate deadlines, such as large data-mining or other analytical efforts. And it promises to further increase Amazon’s capacity utilization, which will in turn allow Amazon to continue to reduce its prices, attract more customers, further smooth demand, and avoid wasted capital. As Insull discovered, cutting prices to optimize capacity utilization sets a virtuous cycle in motion.

In describing the new “spot instances” plan, Amazon CTO Werner Vogels used words that could have come out of Insull’s mouth a century ago:

Spot Instances are an innovation that is made possible by the unparalleled economies of scale created by the tremendous growth of the AWS Infrastructure Services. The broad Amazon EC2 customer base brings such diversity in workload and utilization patterns that it allows us to operate Amazon EC2 with extreme efficiency. True to the Amazon philosophy, we let our customers benefit from the economies of scale they help us create by lowering our prices when we achieve lower cost structures. Consistently we have lowered compute, storage and bandwidth prices based on such cost savings.

At Chicago Edison, Insull had nothing to lose. He had recently quit his executive position at Thomas Edison’s General Electric, the dominant player in on-premises electricity generation. No longer subject to the constraints of the old business model, which he had played a crucial role in establishing, he had the freedom to destroy that model. Amazon Web Services is also an outsider in the IT business, unbeholden to the constraints of the established and very lucrative business model, and that is the company’s great advantage.

UPDATE: Jonathan Boutelle, a founder of Slideshare, already has a strategy for gaming AWS’s spot market: bid high, buy low. That should be music to Amazon’s ears. If enough buyers pursue it, the spot price will quickly approach the set price.

Hypermultitasking

The Britannica Blog has been running a forum on multitasking this week, including posts from Maggie Jackson, Howard Rheingold, and Heather Gold. My own small contribution to the discussion appears today and is reprinted below:

Thank God for multitasking. Can you imagine how dull life would be if we humans lacked the ability to rapidly and seamlessly shift our focus from one task or topic to another? We wouldn’t be able to listen to the radio while driving, have conversations while cooking, juggle assignments at work, or even chew gum while walking. The world would grind to a depressing halt.

The ability to multitask is one of the essential strengths of our infinitely amazing brains. We wouldn’t want to lose it. But as neurobiologists and psychologists have shown, and as Maggie Jackson has carefully documented, we pay a price when we multitask. Because the depth of our attention governs the depth of our thought and our memory, when we multitask we sacrifice understanding and learning. We do more but know less. And the more tasks we juggle and the more quickly we switch between them, the higher the cognitive price we pay.

The problem today is not that we multitask. We’ve always multitasked. The problem is that we never stop multitasking. The natural busyness of our lives is being amplified by the networked gadgets that constantly send us messages and alerts, bombard us with other bits of important and trivial information, and generally interrupt the train of our thought. The data barrage never lets up. As a result, we devote ever less time to the calmer, more attentive modes of thinking that have always given richness to our intellectual lives and our culture—the modes of thinking that involve concentration, contemplation, reflection, introspection. The less we practice these habits of mind, the more we risk losing them altogether.

There’s evidence that, as Howard Rheingold suggests, we can train ourselves to be better multitaskers, to shift our attention even more swiftly and fluidly among contending chores and stimuli. And that will surely help us navigate the fast-moving stream of modern life. But improving our ability to multitask, neuroscience tells us in no uncertain terms, will never return to us the depth of understanding that comes with attentive, single-minded thought. You can improve your agility at multitasking, but you will never be able to multitask and engage in deep thought at the same time.