What algorithms want


Here’s another brief excerpt from my new essay, “The Manipulators: Facebook’s Social Engineering Project,” in the Los Angeles Review of Books:

We have had a hard time thinking clearly about companies like Google and Facebook because we have never before had to deal with companies like Google and Facebook. They are something new in the world, and they don’t fit neatly into our existing legal and cultural templates. Because they operate at such unimaginable magnitude, carrying out millions of informational transactions every second, we’ve tended to think of them as vast, faceless, dispassionate computers — as information-processing machines that exist outside the realm of human intention and control. That’s a misperception, and a dangerous one.

Modern computers and computer networks enable human judgment to be automated, to be exercised on a vast scale and at a breathtaking pace. But it’s still human judgment. Algorithms are constructed by people, and they reflect the interests, biases, and flaws of their makers. As Google’s founders themselves pointed out many years ago, an information aggregator operated for commercial gain will inevitably be compromised and should always be treated with suspicion. That is certainly true of a search engine that mediates our intellectual explorations; it is even more true of a social network that mediates our personal associations and conversations.

Because algorithms impose on us the interests and biases of others, we have not only a right but an obligation to carefully examine and, when appropriate, judiciously regulate those algorithms. We have a right and an obligation to understand how we, and our information, are being manipulated. To ignore that responsibility, or to shirk it because it raises hard problems, is to grant a small group of people — the kind of people who carried out the Facebook and OKCupid experiments — the power to play with us at their whim.

What algorithms want is what the people who write algorithms want. Appreciating that, and grappling with the implications, strikes me as one of the great challenges now lying before us.

Image: “abacus” by Jenny Downing.

3 thoughts on “What algorithms want”

  1. Nate

    I would actually make the case that what the algorithm wants is not necessarily what the people who originally wrote the system want – it’s just as much out of their control, even if they may believe otherwise. These algorithms apply simple rules to interactions, but the resulting complexity leads to completely unpredictable results. One can use the n-body problem in physics as an analogy: as connections and interactions between elements grow (n increases), the ability to predict those interactions becomes exponentially more difficult. Add in constantly changing rules that govern those interactions and the ability to predict with any accuracy is completely thrown out the window.

    One need only look at the bureaucratic systems created in business and government over the last 100 years, designed to solve problems that appear, on the surface, far simpler. Marketing departments spend millions on ad campaigns that barely move the needle, yet a ridiculous video unpredictably goes viral and sales skyrocket.

    The belief that if we are just smart enough we can write a law, business strategy, or computer algorithm to plan for complex outcomes is the great hubris of our time.

    I agree – transparency is absolutely necessary. But the belief that we could possibly design any kind of regulation to ensure an algorithm doesn’t do something is simply magical thinking – the designers of these algorithms may have fooled themselves into thinking they have solved human behavior, but 2000 years of human history begs to differ…

  2. Nick Post author

    Nate, your point about unintended and unplanned-for consequences is well taken. That’s certainly an important part of the story. Thanks, Nick

  3. Brutus

    As in molecular physics, human behavior is impossible to predict at the fine-grained level of the individual — which is essentially the failure of behaviorism — but I daresay that group/mob behavior is surprisingly malleable at some undefined level. That’s one of the main teachings of Edward Bernays, which continues to motivate Machiavellian CEOs, Madison Ave. marketers, and political operatives of all stripes. Then a funny thing happens: the Wild Wild West reasserts itself as history, and social engineers encounter profound difficulty channeling and containing the various beasts they have purposely set loose. The sweet spot in the middle ground doesn’t completely overwhelm the foreground or background, but it’s potent enough to cause quite a lot of trouble.
