
The Donald and The Swarm

In a tweeted response to my Politico essay on social media’s influence on the 2016 campaign, The Atlantic‘s political editor, Yoni Appelbaum, suggests that cable TV, rather than social media, is the real driver of the Trump phenomenon. He offers a chart as backup.

I think Appelbaum may be mistaking effect for cause. Yes, Trump has dominated cable coverage over the last month. But Trump has dominated all coverage over the last month. Pull together the same chart for, say, print news or radio news, and you’ll almost certainly see a similar picture. The news media is a swarm organism. No individual medium operates in isolation.

Raw measures of media coverage, in other words, reveal who’s getting covered, but they don’t say much about why coverage is playing out the way it is.  Continue reading

The coming of the Snapchat candidate


In Western politics, we no longer get to experience the fun of a revolution, but at least we get the occasional media revolution. In an essay out today at Politico, I argue that we’re at the start of the third big media makeover of modern political campaigns. First came radio in the twenties, then TV in the sixties. Now it’s social media that’s changing the tone and tenor of elections. The Donald may burn out soon, but the inability of both the press and his adversaries to make sense of the Trump campaign suggests that the rules have changed. The tidy narratives of TV campaigns are yesterday’s news.

Here’s a bit from the piece:

What’s important now is not so much image as personality. But, as the Trump phenomenon reveals, it’s only a particular kind of personality that works — one that’s big enough to grab the attention of the perpetually distracted but small enough to fit neatly into a thousand tiny media containers. It might best be described as a Snapchat personality. It bursts into focus at regular intervals without ever demanding steady concentration.

Social media favors the bitty over the meaty, the cutting over the considered. It also prizes emotionalism over reason. The more visceral the message, the more quickly it circulates and the longer it holds the darting public eye. In something of a return to the pre-radio days, the fiery populist now seems more desirable, more worthy of attention, than the cool wonk. It’s the crusty Bernie and the caustic Donald that get hearted and hash-tagged, friended and followed. Is it any wonder that “Feel the Bern” has become the rallying cry of the Sanders campaign?

Emotional appeals can be good for politics. They can spur civic involvement, even among the disenfranchised and disenchanted. And they can galvanize public attention, focusing it on injustices and abuses of power. An immediate emotional connection can, at best, deepen into a sustained engagement with the political process. But there’s a dark side to social media’s emotionalism. Trump’s popularity took off only after he demonized Mexican immigrants, playing to the public’s frustrations and fears. That’s the demagogue’s oldest tactic, and it worked. The Trump campaign may have qualities of farce, but it also suggests that a Snapchat candidate, passionate yet hollow, could be a perfect vessel for a cult of personality.

Here’s the rest.

Image: Dick Nixon reacts to the arrival of the TV era.

Logical fantasies

From Tim Hwang and Madeleine Clare Elish’s “The Mirage of the Marketplace,” in Slate:

When you open the Uber app as a rider, you see a map of your local pickup area, with little sedans around that appear to be drivers available for a request. While you might assume these reflect an accurate picture of market supply, the way drivers are configured in Uber’s marketplace can be misleading. According to Rosenblat and Stark, the presence of those virtual cars on the passenger’s screen does not necessarily reflect an accurate number of drivers who are physically present or their precise locations. Instead, these phantom cars are part of a “visual effect” that Uber uses to emphasize the proximity of drivers to passengers. Not surprisingly, the visual effect shows cars nearby, even when they might not actually exist. Demand, in this case, sees a simulated picture of supply. Whether you are a driver or a rider, the algorithm operating behind the curtain at Uber shows a through-the-looking-glass version of supply and demand.

From Edward Moore Geist’s “Is Artificial Intelligence Really an Existential Threat to Humanity?,” in the Bulletin of the Atomic Scientists: Continue reading

The new behavioralism


We live mythically, even the most rational among us. In the middle of a bromidic Q&A session on Facebook last month, Mark Zuckerberg fielded a question from the cosmologist Stephen Hawking:

I would like to know a unified theory of gravity and the other forces. Which of the big questions in science would you like to know the answer to and why?

Zuckerberg replied that he was “most interested in questions about people,” and he gave some examples of the questions about people that he found most interesting. “What will enable us to live forever?” was one. “How can we empower humans to learn a million times more?” was another.

He then divulged something interesting, if not unexpected, about his perception of the social world:

I’m also curious about whether there is a fundamental mathematical law underlying human social relationships that governs the balance of who and what we all care about. I bet there is.

Call it the Unified Theory of Love.

Zuckerberg’s answer underscores, yet again, what an odd choice of overseer we’ve made for the world’s predominant social network. We’ve placed our social lives in the hands of a maladroit young man who believes that human relations and affiliations can be reduced to equations.

The fault, dear Brutus, is not in our stars,
But in ourselves, that we are underlings.

Cassius’s point was that fate lies not in the stars but in ourselves. Zuckerberg, in effect, puts it back in the stars, except that his stars are made of data. Human affairs, he assumes, are governed by fate.

It’s not hard to understand the source of Zuckerberg’s misperception. Human beings, like ants or chickens, share a certain bundle of tendencies, a certain nature, and if you analyze our behavior statistically that nature will evidence itself in mathematical regularities. Zuckerberg is hardly the first to confuse the measurement of a phenomenon with the cause of the phenomenon. If some amount of data reveals a pattern, then, surely, more data will reveal “a fundamental mathematical law.”

Zuckerberg’s belief that social relations are the output of a cosmic computer running a cosmic algorithm is more than just the self-serving fantasy of a man who has made a fortune by seeing people as nodes in a mathematical graph. It’s an expression, however extreme, of a new form of behavioralism that has recently come into vogue, pulled along in the slipstream of the excitement over “big data.”

From the mid-1950s to the mid-1960s, sociological thinking in the United States was dominated by the behavioralist school. Heirs of the earlier positivists, behavioralists believed that social structures and dynamics could only be understood through the rigorous, scientific analysis of hard data.* David Easton, a prominent University of Chicago political scientist, laid out the tenets of the movement in his 1962 article “The Current Meaning of ‘Behavioralism’ in Political Science”:

There are discoverable uniformities in political behavior. These can be expressed in generalizations or theories with explanatory and predictive value. … The validity of such generalizations must be testable in principle, by reference to relevant behavior. … Precision in recording of data and the statement of findings requires measurement and quantification.

The rise of behavioralism reflected a frustration with the perceived subjectivity of traditional modes of sociological and political inquiry, particularly historical analysis and philosophical speculation. History and philosophy, behavioralists believed, led only to ideological bickering, not to unbiased knowledge or reliable solutions to problems. But behavioralism also had technological origins. It was spurred by the post-war arrival of digital computers, machines that promised to open new horizons in the collection and analysis of data on human behavior. Objectivity would replace subjectivity. Technology would replace ideology.

Today’s neobehavioralism has also been inspired by advances in computer technology, particularly the establishment of vast databases of information on people’s behavior and the development of automated statistical techniques to parse the information. The MIT data scientist Alex Pentland, in his revealingly titled 2014 book Social Physics, offered something of a manifesto for the new behavioralism, using terms that, consciously or not, echoed what was heard in the early 60s:

We need to move beyond merely describing social structure to building a causal theory of social structure. Progress in developing this represents steps toward what [neuroscientist] David Marr called a computational theory of behavior: a mathematical explanation of why society reacts as it does and how these reactions may (or may not) solve human problems. … Such a theory could tie together mechanisms of social interactions with our newly acquired massive amounts of behavior data in order to engineer better social systems.

Like their predecessors, today’s neobehavioralists view the scientific analysis of “big data” as a means of escaping subjective modes of sociological inquiry and the ideological baggage those modes often carry. “The importance of a science of social physics,” Pentland suggested, goes beyond “its utility in providing accurate, useful mathematical predictions.” It promises to provide “a language that is better than the old vocabulary of markets and classes, capital and production”:

Words such as “markets,” “political classes,” and “social movements” shape our thinking about the world. They are useful, of course, but they also represent overly simplistic thinking; they therefore limit our ability to think clearly and effectively. [Big data offers] a new set of concepts with which I believe we can more accurately discuss our world and plan the future.

Zuckerberg will lose his bet, and Pentland and the other neobehavioralists will not discover “a causal theory of social structure” that can be expressed in the pristine language of mathematics. Neobehavioralism will, like behavioralism before it, fall short of its lofty goals, even if it does provide valuable insights into social dynamics. Despite, or because of, their subjective messiness, history and philosophy will continue to play central roles in the exploration of what makes all of us tick. The end of ideology is not nigh.

But there is something that sets neobehavioralism apart from behavioralism. The collection of behavioral data today generates great commercial value along with its value in social research, and there’s an inevitable tension between the data’s scientific and commercial exploitation. That tension will shadow any attempt to, as Pentland put it, “engineer better social systems.” Better for whom, and by what measure? Even if no fundamental mathematical law of social relationships is in the offing, the ability to closely monitor and influence those relationships will continue to provide rich profit potential. One suspects that Zuckerberg’s dream of a Unified Theory of Love is inspired less by cupid than by cupidity.



*Though the two are related, behavioralism shouldn’t be confused with behaviorism, the psychological movement popular earlier in the twentieth century.

Image: Detail of Sodoma’s “Cupid in the Landscape.”

The end of corporate computing (10th anniversary edition)


Last week, in its quarterly earnings report, Amazon revealed for the first time how much money its cloud computing operation, Amazon Web Services, takes in. The numbers were impressive. AWS has become an $8 billion business, and its revenues continue to grow swiftly, nearly doubling in the most recent quarter from the same period last year. The unit’s profit margin — a surprisingly robust 21 percent — is vastly wider than that of the company’s retailing operation. Indeed, without AWS, Amazon would have lost a lot of money in the quarter instead of posting a narrow profit.

AWS’s results show how well established “the cloud” has become. Most personal computing these days relies on cloud services — lose your connection, and your computing device becomes pretty much useless — and businesses, too, are looking more and more to the cloud, rather than their own data centers, to fill their information technology needs. It’s easy to forget how quickly this epochal shift in the nature of computing has occurred. Just ten years ago, the term “cloud computing” was unknown, and the idea that computing would become a centrally managed utility service was considered laughable by many big IT companies and their customers. Back then, in 2005, I wrote an article for MIT’s Sloan Management Review titled “The End of Corporate Computing” in which I argued that computing was fated to become a utility, with big, central data centers feeding services to customers over the internet’s grid. (The article inspired my 2008 book The Big Switch.) I got plenty of things wrong in the article, but I think the ensuing ten years have shown that the piece was fundamentally on target in predicting the rise of what we now call the cloud. So here, to mark the tenth birthday of the article, is the full text of “The End of Corporate Computing.”

Something happened in the first years of the 20th century that would have seemed unthinkable just a few decades earlier: Manufacturers began to shut down and dismantle their waterwheels, steam engines and electric generators. Since the beginning of the Industrial Age, mills and factories had had no choice but to maintain private power plants to run their machinery — power generation was a seemingly intrinsic part of doing business — but as the new century dawned, an alternative was emerging. Dozens of fledgling electricity producers were erecting central generating stations and using a network of wires to distribute their power to distant customers. Manufacturers no longer had to run their own dynamos; they could simply buy the electricity they needed, as they required it, from the new suppliers. Power generation was being transformed from a corporate function into a utility.

Now, almost exactly a century later, history is repeating itself. The most important commercial development of the last 50 years — information technology — is undergoing a similar transformation. It, too, is beginning an inexorable shift from being an asset that companies own — in the form of computers, software and myriad related components — to being a service that they purchase from utility providers. Few in the business world have contemplated the full magnitude of this change or its far-reaching consequences. To date, popular discussions of utility computing have rarely progressed beyond a recitation of IT vendors’ marketing slogans, laden with opaque terms like “autonomic systems,” “server virtualization” and “service-oriented architecture” [1]. Rather than illuminate the future, such gobbledygook has only obscured it.

The prevailing rhetoric is, moreover, too conservative. It assumes that the existing model of IT supply and use — and the corporate data center that lies at its core — will endure. But that view is perilously short-sighted. The traditional model’s economic foundation is already crumbling, and it is unlikely to survive in the long run. As the earlier transformation of electricity supply suggests, IT’s shift from a fragmented capital asset to a centralized utility service will be a momentous one. It will overturn strategic and operating assumptions, alter industrial economics, upset markets, and pose daunting challenges to every user and vendor. The history of the commercial application of information technology has been characterized by astounding leaps, but nothing that has come before — not even the introduction of the personal computer or the opening of the Internet — will match the upheaval that lies just over the horizon. Continue reading

In the kingdom of the bored, the one-armed bandit is king


It still feels a little shameful to admit, but what engages us more and more is not the content but the mechanism. Kenneth Goldsmith, in a Los Angeles Review of Books essay, writes of a recent day when he felt an urge to listen to some music by the American composer Morton Feldman:

I dug into my MP3 drive, found my Feldman folder and opened it up. Amongst the various folders in the directory was one labeled “The Complete Works of Morton Feldman.” I was surprised to see it there; I didn’t remember downloading it. Curious, I looked at its date — 2009 — and realized that I must’ve grabbed it during the heyday of MP3 sharity blogs. I opened it to find 79 albums as zipped files. I unzipped three of them, listened to part of one, and closed the folder. I haven’t opened it since.

The pleasure of listening to the music, it turned out, was not as great as he had anticipated. He found more pleasure in manipulating the files than in playing them:

Our role as librarians and archivists has outpaced our role as cultural consumers. Engaging with media in a traditional sense is often the last thing we do. … In the digital ecosystem, the apparatuses surrounding the artifact are more engaging than the artifact itself. Management (acquisition, distribution, archiving, filing, redundancy) is the cultural artifact’s new content. … In an unanticipated twist to John Perry Barlow’s 1994 prediction that in the digital age we’d be able to enjoy wine without the bottles, we’ve now come to prefer the bottles to the wine.

It’s as though we find ourselves, suddenly, in a vast library, an infinite library, a library of Borgesian proportions, and we discover that what’s of most interest to us is not the books on the shelves but the intricacies of the Dewey Decimal System. Continue reading

How to write a book when you’re paid by the page


When I first heard that Amazon was going to start paying its Kindle Unlimited authors according to the number of pages in their books that actually get read, I wondered whether there might be an opportunity for an intra-Amazon arbitrage scheme that would allow me to game the system and drain Jeff Bezos’s bank account. I thought I might be able to start publishing long books of computer-generated gibberish and then use Amazon’s Mechanical Turk service to pay Third World readers to scroll through the pages at a pace that would register each page as having been read. If I could pay the Turkers a fraction of a penny less to look at a page than Amazon paid me for the “read” page, I’d be able to get really rich and launch my own space exploration company.

Alas, I couldn’t make the numbers work. Amazon draws the royalties for the program from a fixed pool of funds, which serves to cap the upside for devious scribblers.
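For the curious, here’s a minimal back-of-the-envelope sketch, in Python, of why the fixed pool spoils the fun. Every number in it (the size of the monthly fund, the pages real readers get through, what a Turker might charge to “read” a page) is invented for illustration, and it assumes, as described above, that the pool is simply divvied up in proportion to pages read. The structure is what matters: every bot-read page added to the system shrinks the per-page rate the system pays out.

```python
# Hypothetical model of the Kindle Unlimited arbitrage scheme described above.
# All figures are made up for illustration; Amazon publishes neither the exact
# pool mechanics nor the numbers assumed here.

def payout_per_page(pool, honest_pages, scheme_pages):
    """Per-page royalty when a fixed pool is split across all pages read."""
    return pool / (honest_pages + scheme_pages)

def scheme_profit(pool, honest_pages, scheme_pages, turker_cost_per_page):
    """Royalties earned on bot-read pages, minus the Turkers' fees."""
    rate = payout_per_page(pool, honest_pages, scheme_pages)
    return scheme_pages * (rate - turker_cost_per_page)

POOL = 11_000_000             # assumed monthly royalty fund, in dollars
HONEST_PAGES = 2_000_000_000  # assumed pages read by real customers per month
TURKER_COST = 0.004           # assumed cost per "read" page, in dollars

for scheme_pages in (10_000_000, 100_000_000, 1_000_000_000, 10_000_000_000):
    rate = payout_per_page(POOL, HONEST_PAGES, scheme_pages)
    profit = scheme_profit(POOL, HONEST_PAGES, scheme_pages, TURKER_COST)
    print(f"{scheme_pages:>14,} bot-read pages: "
          f"${rate:.4f} per page, ${profit:,.0f} profit")
```

With these made-up figures the scheme’s profit tops out in the low six figures before the diluted per-page rate falls below the Turkers’ fee and the whole thing turns into a money pit. Not nothing, but not Mars money.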

So much for my Mars vacation. Still, even in a zero-sum game that pits writer against writer, I figured I might be able to steal a few pennies from the pockets of my fellow authors. (I hate them all, anyway.) I would just need to do a better job of mastering the rules of the game, which Amazon was kind enough to lay out for me:

Under the new payment method, you’ll be paid for each page individual customers read of your book, the first time they read it. … To determine a book’s page count in a way that works across genres and devices, we’ve developed the Kindle Edition Normalized Page Count (KENPC). We calculate KENPC based on standard settings (e.g. font, line height, line spacing, etc.), and we’ll use KENPC to measure the number of pages customers read in your book, starting with the Start Reading Location (SRL) to the end of your book.

The first thing that has to be said is that if you’re a poet, you’re screwed. That page-normalization deal is going to kill you. I mean, Walt Whitman might do okay. But Mary Oliver? Totally hosed. So that manuscript of dense, trimetric verse you’ve been fussing over for the last twenty years? Shred it. Continue reading