Logical fantasies

From Tim Hwang and Madeleine Clare Elish’s “The Mirage of the Marketplace,” in Slate:

When you open the Uber app as a rider, you see a map of your local pickup area, with little sedans around that appear to be drivers available for a request. While you might assume these reflect an accurate picture of market supply, the way drivers are configured in Uber’s marketplace can be misleading. According to Rosenblat and Stark, the presence of those virtual cars on the passenger’s screen does not necessarily reflect an accurate number of drivers who are physically present or their precise locations. Instead, these phantom cars are part of a “visual effect” that Uber uses to emphasize the proximity of drivers to passengers. Not surprisingly, the visual effect shows cars nearby, even when they might not actually exist. Demand, in this case, sees a simulated picture of supply. Whether you are a driver or a rider, the algorithm operating behind the curtain at Uber shows a through-the-looking-glass version of supply and demand.

From Edward Moore Geist’s “Is Artificial Intelligence Really an Existential Threat to Humanity?,” in the Bulletin of the Atomic Scientists:

For all its entertainment value as a philosophical exercise, Bostrom’s concept of superintelligence is mostly a distraction from the very real ethical and policy challenges posed by ongoing advances in artificial intelligence. Although it has failed so far to realize the dream of intelligent machines, artificial intelligence has been one of the greatest intellectual adventures of the last 60 years. In their quest to understand minds by trying to build them, artificial intelligence researchers have learned a tremendous amount about what intelligence is not.

From John Gray’s “The Friedrich Hayek I Knew, and What He Got Right — and Wrong,” in the New Statesman:

Hayek liked to ridicule the idea that institutions could be designed on the basis of abstract models – a view he criticised as embodying a philosophy of “constructivist rationalism”. Yet his scheme for an ultra-liberal constitution was a prototypical version of the philosophy he had attacked.

It may have been a half-conscious awareness of the limitations of this rationalistic philosophy that fuelled his evolutionary speculations. Underpinning his defence of the free market was a belief in what he called “spontaneous order in society” – the idea that, if only human beings were not subject to oppressive governments, they would evolve in ways that allowed them to live together in peace and freedom. This was not a view held by Hayek’s friend and LSE colleague Karl Popper, who gently demolished it when I talked with him, or by the conservative philosopher Michael Oakeshott, also a colleague at the LSE, who dismissed it – accurately – as “rubbish”. A type of unplanned order may well emerge in society but there is no reason why it should respect liberal values. There is nothing particularly liberal about the Mafia.

From Moira Weigel’s “Fitted,” in The New Inquiry:

FitBit tells us back a story of our lives that has become highly abstract. The difference between the springtime run that you take with two friends and the half hour of jumping jacks that you do in the bathroom after not managing to throw up all of a chicken burger will not register. In this life, steps are steps.

The new behavioralism


We live mythically, even the most rational among us. In the middle of a bromidic Q&A session on Facebook last month, Mark Zuckerberg fielded a question from the cosmologist Stephen Hawking:

I would like to know a unified theory of gravity and the other forces. Which of the big questions in science would you like to know the answer to and why?

Zuckerberg replied that he was “most interested in questions about people,” and he gave some examples of the questions about people that he found most interesting. “What will enable us to live forever?” was one. “How can we empower humans to learn a million times more?” was another.

He then divulged something interesting, if not unexpected, about his perception of the social world:

I’m also curious about whether there is a fundamental mathematical law underlying human social relationships that governs the balance of who and what we all care about. I bet there is.

Call it the Unified Theory of Love.

Zuckerberg’s answer underscores, yet again, what an odd choice we made when we picked a person to oversee the world’s predominant social network. We’ve placed our social lives in the hands of a maladroit young man who believes that human relations and affiliations can be reduced to equations.

The fault, dear Brutus, is not in our stars,
But in ourselves, that we are underlings.

The words are Cassius’s, and they deny that our affairs are written in the stars. Zuckerberg, in effect, reverses them: what Cassius refused to find in the stars, he expects to find in data — a fate governed by mathematical law.

It’s not hard to understand the source of Zuckerberg’s misperception. Human beings, like ants or chickens, share a certain bundle of tendencies, a certain nature, and if you analyze our behavior statistically that nature will evidence itself in mathematical regularities. Zuckerberg is hardly the first to confuse the measurement of a phenomenon with the cause of the phenomenon. If some amount of data reveals a pattern, then, surely, more data will reveal “a fundamental mathematical law.”

Zuckerberg’s belief that social relations are the output of a cosmic computer running a cosmic algorithm is more than just the self-serving fantasy of a man who has made a fortune by seeing people as nodes in a mathematical graph. It’s an expression, however extreme, of a new form of behavioralism that has recently come into vogue, pulled along in the slipstream of the excitement over “big data.”

From the mid-1950s to the mid-1960s, sociological thinking in the United States was dominated by the behavioralist school. Heirs of the earlier positivists, behavioralists believed that social structures and dynamics could only be understood through the rigorous, scientific analysis of hard data. David Easton, a prominent University of Chicago political scientist, laid out the tenets of the movement in his 1962 article “The Current Meaning of ‘Behavioralism’ in Political Science”:

There are discoverable uniformities in political behavior. These can be expressed in generalizations or theories with explanatory and predictive value. … The validity of such generalizations must be testable in principle, by reference to relevant behavior. … Precision in recording of data and the statement of findings requires measurement and quantification.

The rise of behavioralism reflected a frustration with the perceived subjectivity of traditional modes of sociological and political inquiry, particularly historical analysis and philosophical speculation. History and philosophy, behavioralists believed, led only to ideological bickering, not to unbiased knowledge or reliable solutions to problems. But behavioralism also had technological origins. It was spurred by the post-war arrival of digital computers, machines that promised to open new horizons in the collection and analysis of data on human behavior. Objectivity would replace subjectivity. Technology would replace ideology.

Today’s neobehavioralism has also been inspired by advances in computer technology, particularly the establishment of vast databases of information on people’s behavior and the development of automated statistical techniques to parse the information. The MIT data scientist Alex Pentland, in his revealingly titled 2014 book Social Physics, offered something of a manifesto for the new behavioralism, using terms that, consciously or not, echoed what was heard in the early 60s:

We need to move beyond merely describing social structure to building a causal theory of social structure. Progress in developing this represents steps toward what [neuroscientist] David Marr called a computational theory of behavior: a mathematical explanation of why society reacts as it does and how these reactions may (or may not) solve human problems. … Such a theory could tie together mechanisms of social interactions with our newly acquired massive amounts of behavior data in order to engineer better social systems.

As with their predecessors, today’s neobehavioralists also view the scientific analysis of “big data” as a means of escaping subjective modes of sociological inquiry and the ideological baggage those modes often carry. “The importance of a science of social physics,” Pentland suggested, goes beyond “its utility in providing accurate, useful mathematical predictions.” It promises to provide “a language that is better than the old vocabulary of markets and classes, capital and production”:

Words such as “markets,” “political classes,” and “social movements” shape our thinking about the world. They are useful, of course, but they also represent overly simplistic thinking; they therefore limit our ability to think clearly and effectively. [Big data offers] a new set of concepts with which I believe we can more accurately discuss our world and plan the future.

Zuckerberg will lose his bet, and Pentland and the other neobehavioralists will not discover “a causal theory of social structure” that can be expressed in the pristine language of mathematics. Neobehavioralism will, like behavioralism before it, fall short of its lofty goals, even if it does provide valuable insights into social dynamics. Despite, or because of, their subjective messiness, history and philosophy will continue to play central roles in the exploration of what makes all of us tick. The end of ideology is not nigh.

But there is something that sets neobehavioralism apart from behavioralism. The collection of behavioral data today generates great commercial value along with its value in social research, and there’s an inevitable tension between the data’s scientific and commercial exploitation. That tension will shadow any attempt to, as Pentland put it, “engineer better social systems.” Better for whom, and by what measure? Even if no fundamental mathematical law of social relationships is in the offing, the ability to closely monitor and influence those relationships will continue to provide rich profit potential. One suspects that Zuckerberg’s dream of a Unified Theory of Love is inspired less by cupid than by cupidity.

Image: Detail of Sodoma’s “Cupid in the Landscape.”

The end of corporate computing (10th anniversary edition)


Last week, in its quarterly earnings report, Amazon.com revealed for the first time how much money its cloud computing operation, Amazon Web Services, takes in. The numbers were impressive. AWS has become an $8 billion business, and its revenues continue to grow swiftly, nearly doubling in the most recent quarter from the same period last year. The unit’s profit margin — a surprisingly robust 21 percent — is vastly wider than that of the company’s retailing operation. Indeed, without AWS, Amazon would have lost a lot of money in the quarter instead of posting a narrow profit.

AWS’s results show how well established “the cloud” has become. Most personal computing these days relies on cloud services — lose your connection, and your computing device becomes pretty much useless — and businesses, too, are looking more and more to the cloud, rather than their own data centers, to fill their information technology needs. It’s easy to forget how quickly this epochal shift in the nature of computing has occurred. Just ten years ago, the term “cloud computing” was unknown, and the idea that computing would become a centrally managed utility service was considered laughable by many big IT companies and their customers. Back then, in 2005, I wrote an article for MIT’s Sloan Management Review titled “The End of Corporate Computing” in which I argued that computing was fated to become a utility, with big, central data centers feeding services to customers over the internet’s grid. (The article inspired my 2008 book The Big Switch.) I got plenty of things wrong in the article, but I think the ensuing ten years have shown that the piece was fundamentally on target in predicting the rise of what we now call the cloud. So here, to mark the tenth birthday of the article, is the full text of “The End of Corporate Computing.”

Something happened in the first years of the 20th century that would have seemed unthinkable just a few decades earlier: Manufacturers began to shut down and dismantle their waterwheels, steam engines and electric generators. Since the beginning of the Industrial Age, mills and factories had had no choice but to maintain private power plants to run their machinery — power generation was a seemingly intrinsic part of doing business — but as the new century dawned, an alternative was emerging. Dozens of fledgling electricity producers were erecting central generating stations and using a network of wires to distribute their power to distant customers. Manufacturers no longer had to run their own dynamos; they could simply buy the electricity they needed, as they required it, from the new suppliers. Power generation was being transformed from a corporate function into a utility.

Now, almost exactly a century later, history is repeating itself. The most important commercial development of the last 50 years — information technology — is undergoing a similar transformation. It, too, is beginning an inexorable shift from being an asset that companies own — in the form of computers, software and myriad related components — to being a service that they purchase from utility providers. Few in the business world have contemplated the full magnitude of this change or its far-reaching consequences. To date, popular discussions of utility computing have rarely progressed beyond a recitation of IT vendors’ marketing slogans, laden with opaque terms like “autonomic systems,” “server virtualization” and “service-oriented architecture” [1]. Rather than illuminate the future, such gobbledygook has only obscured it.

The prevailing rhetoric is, moreover, too conservative. It assumes that the existing model of IT supply and use — and the corporate data center that lies at its core — will endure. But that view is perilously short-sighted. The traditional model’s economic foundation is already crumbling, and it is unlikely to survive in the long run. As the earlier transformation of electricity supply suggests, IT’s shift from a fragmented capital asset to a centralized utility service will be a momentous one. It will overturn strategic and operating assumptions, alter industrial economics, upset markets, and pose daunting challenges to every user and vendor. The history of the commercial application of information technology has been characterized by astounding leaps, but nothing that has come before — not even the introduction of the personal computer or the opening of the Internet — will match the upheaval that lies just over the horizon.

In the kingdom of the bored, the one-armed bandit is king


It still feels a little shameful to admit, but what engages us more and more is not the content but the mechanism. Kenneth Goldsmith, in a Los Angeles Review of Books essay, writes of a recent day when he felt an urge to listen to some music by the American composer Morton Feldman:

I dug into my MP3 drive, found my Feldman folder and opened it up. Amongst the various folders in the directory was one labeled “The Complete Works of Morton Feldman.” I was surprised to see it there; I didn’t remember downloading it. Curious, I looked at its date — 2009 — and realized that I must’ve grabbed it during the heyday of MP3 sharity blogs. I opened it to find 79 albums as zipped files. I unzipped three of them, listened to part of one, and closed the folder. I haven’t opened it since.

Listening to the music, it turned out, gave Goldsmith less pleasure than he had anticipated. The real draw was manipulating the files:

Our role as librarians and archivists has outpaced our role as cultural consumers. Engaging with media in a traditional sense is often the last thing we do. … In the digital ecosystem, the apparatuses surrounding the artifact are more engaging than the artifact itself. Management (acquisition, distribution, archiving, filing, redundancy) is the cultural artifact’s new content. … In an unanticipated twist to John Perry Barlow’s 1994 prediction that in the digital age we’d be able to enjoy wine without the bottles, we’ve now come to prefer the bottles to the wine.

It’s as though we find ourselves, suddenly, in a vast library, an infinite library, a library of Borgesian proportions, and we discover that what’s of most interest to us is not the books on the shelves but the intricacies of the Dewey Decimal System.

How to write a book when you’re paid by the page


When I first heard that Amazon was going to start paying its Kindle Unlimited authors according to the number of pages in their books that actually get read, I wondered whether there might be an opportunity for an intra-Amazon arbitrage scheme that would allow me to game the system and drain Jeff Bezos’s bank account. I thought I might be able to start publishing long books of computer-generated gibberish and then use Amazon’s Mechanical Turk service to pay Third World readers to scroll through the pages at a pace that would register each page as having been read. If I could pay the Turkers a fraction of a penny less to look at a page than Amazon paid me for the “read” page, I’d be able to get really rich and launch my own space exploration company.

Alas, I couldn’t make the numbers work. Amazon draws the royalties for the program from a fixed pool of funds, which serves to cap the upside for devious scribblers.
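The fixed-pool arithmetic can be sketched in a few lines. This is a toy model — every figure in it is invented for illustration, not drawn from Amazon’s actual pool or payout rates. The point it demonstrates: because the per-page payout is the pool divided by total pages read, a schemer’s own fake “reads” dilute the rate for everyone, himself included, so revenue from gaming is bounded by the pool while the Mechanical Turk bill grows without limit.

```python
# Toy model of a fixed royalty pool (all figures invented).
# Per-page payout = pool / total pages read, so injected fake
# page reads lower the rate the schemer himself is paid.

POOL = 10_000_000             # hypothetical monthly royalty pool ($)
HONEST_PAGES = 2_500_000_000  # hypothetical legitimate page reads
TURKER_COST = 0.003           # hypothetical cost per fake page "read" ($)

def scheme_profit(gamed_pages: int) -> float:
    """Net profit from injecting gamed_pages of paid-for page reads."""
    rate = POOL / (HONEST_PAGES + gamed_pages)  # diluted per-page payout
    return gamed_pages * (rate - TURKER_COST)

# Revenue from gamed pages can never exceed the pool itself, while
# the cost of faking them grows linearly: profit peaks early, then
# turns negative. No Mars vacation.
for pages in (10_000_000, 400_000_000, 2_000_000_000):
    print(f"{pages:>13,} gamed pages -> ${scheme_profit(pages):>12,.0f}")
```

Under these made-up numbers the scheme peaks in the low six figures and starts losing money somewhere past 800 million gamed pages; the cap falls straight out of the division.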

So much for my Mars vacation. Still, even in a zero-sum game that pits writer against writer, I figured I might be able to steal a few pennies from the pockets of my fellow authors. (I hate them all, anyway.) I would just need to do a better job of mastering the rules of the game, which Amazon was kind enough to lay out for me:

Under the new payment method, you’ll be paid for each page individual customers read of your book, the first time they read it. … To determine a book’s page count in a way that works across genres and devices, we’ve developed the Kindle Edition Normalized Page Count (KENPC). We calculate KENPC based on standard settings (e.g. font, line height, line spacing, etc.), and we’ll use KENPC to measure the number of pages customers read in your book, starting with the Start Reading Location (SRL) to the end of your book.

The first thing that has to be said is that if you’re a poet, you’re screwed. That page-normalization deal is going to kill you. I mean, Walt Whitman might do okay. But Mary Oliver? Totally hosed. So that manuscript of dense, trimetric verse you’ve been fussing over for the last twenty years? Shred it.

Music is the oil in the human machine


In announcing the free version of its music streaming service — that’s free as in ads — Google also discloses something revealing about the way it views music:

At any moment in your day, Google Play Music has whatever you need music for — from working, to working out, to working it on the dance floor — and gives you curated radio stations to make whatever you’re doing better. Our team of music experts, including the folks who created Songza, crafts each station song by song so you don’t have to.

This marks a continuation of Google’s promotion of what it terms “activity-based” music. Last year, soon after it acquired Songza, a company that specializes in “curating” playlists to suit particular moods and activities, Google rejiggered its music service to emphasize its practicality:

If you’re a Google Play Music subscriber, next time you open the app you’ll be prompted to play music for a time of day, mood or activity. Choose an activity to get options for several music stations to make whatever you’re doing even better — whether it’s a station for a morning workout, songs to relieve stress during traffic, or the right mix for cooking with friends. Each station has been handcrafted — song by song — by our team of music experts (dozens of DJs, musicians, music critics and ethnomusicologists) to give you the exact right song for the moment.

This is the democratization of the Muzak philosophy. Music becomes an input, a factor of production. Listening to music is not itself an “activity” — music isn’t an end in itself — but rather an enhancer of other activities, each of which must be clearly demarcated. (As I’ve argued before, the fuzziness of human experience is anathema to Silicon Valley. Before you can code it, you have to formalize it.)

When triumphalists fail, they fail triumphantly


Progress turns everyone into a nostalgist sooner or later. You just have to wait for your own particular trigger to come along — the new thing that threatens the old thing you love.

David Weinberger has a new article in The Atlantic called “The Internet That Was (and Still Could Be).” It’s a tortured and ultimately dishonest piece that calls to mind some lines from a great old Buzzcocks tune:

About the future I only can reminisce
For what I’ve had is what I’ll never get
And although this may sound strange
My future and my past are presently disarranged
And I’m surfing on a wave of nostalgia
For an age yet to come.

Weinberger, coauthor of The Cluetrain Manifesto and author of Small Pieces Loosely Joined, has long argued that the “architecture” of the internet provides not only a metaphor but an actual working model for a more perfect society. The net was created with data-communication protocols that enabled “packets of information [to be moved] around without any central management or control,” and that technical architecture, he contends, not only facilitates but promotes democratic values such as “open access to information” and “the permission-free ability to read and to post.” Spanning civil and commercial interests, the net is “an open market of ideas and businesses” that provides “a framework for bottom-up collaboration among equals.”