The explainable


Signature has an interview with Denis Boyles about his new book on the eleventh edition of the Encyclopedia Britannica, Everything Explained That Is Explainable. The title of Boyles’s book was one of the marketing slogans used to sell the encyclopedia when it went on sale in 1910 and 1911. In the interview, Boyles talks about how the monumental reference work was very much a reflection of its time:

Signature: It was a period of great change. The 11th was essentially published in the heart of the Progressive Era.  How did that impact its success?

Boyles: That really is the subject of the 11th. When considered as a book, it’s something like forty million words, but the topic is a singular one: progress. It tells you all the different ways progress can be seen. And it was secular, overwhelmingly so. The 11th was all about what could be measured, what could be known.  And that made progress essentially the turf of technicians and scientists and technical and scientific advances became confused with progress. The latest idea was the best idea. Now, we’re far enough into this to realize that the “latest idea” is just another idea. It may be good, it may not. We’re more weary [sic] as a society, but largely still as secular.

I sense that that last bit, about how we’ve moved beyond the assumption that the latest idea is the best idea, may be wishful thinking on Boyles’s part, or at least a reflection of the fact that he lives in Europe. Here in the U.S. we seem more than ever convinced that progress is “essentially the turf of technicians and scientists.” We see the newness of an idea as the idea’s validation, novelty being contemporary American culture’s central criterion.

Boyles points out that the Britannica’s eleventh edition underpins Wikipedia, and in Wikipedia we see, more clearly than ever, the elevation of and emphasis on measurement as the standard of knowledge and knowability. Wikipedia is pretty good, and ambitiously thorough, on technical and scientific topics, but it’s scattershot, and often just flat-out bad, in its coverage of topics in the humanities. Wikipedia’s editors, as Edward Mendelson has recently suggested, are comfortable in documenting consensus but completely uncomfortable in exercising taste. The kind of informed subjective judgment that is essential to any perceptive discussion of art, literature, or even history is explicitly outlawed at Wikipedia. And Wikipedia, like the eleventh edition of the Britannica, is a reflection of its time. The boundary we draw around “the explainable” is tighter than ever.

“Technical and scientific advances became confused with progress,” says Boyles, and so it is today, a century later.

Technological unemployment, then and now


In “Promise and Peril of Automation,” an article in the New York Times, David Morse writes:

The key area of social change stimulated by automation is employment. Everywhere one finds two things: a positive emphasis on opportunity and a keen sensitivity to change, often translated more concretely into fear.

The emphasis on opportunity is welcome and indicative of the climate in which automation will come to maturity. The sensitivity to change is equally significant. If fears about the future, especially job worries, are dismissed as “unreal” or “unimportant,” human resistance to change will be a major impediment to deriving full social benefit from automation.

What is the basis for these fears? Partly, a natural human uneasiness in the face of the unknown. Partly, the fact that few things are more serious to a worker than unemployment. Partly, too, the fear that automation undercuts the whole employment structure on which society as we know it is based. If, for example, automation cuts direct labor, often by 50 percent or more, and if this goes on from one industry to another, what happens? Even with shorter hours and new opportunities, will not a saturation point be reached, with old jobs disappearing faster than new ones are created, and unemployment on a wide scale raising its ugly head and creeping from one undertaking and industry to another?

Morse’s article was published on June 9, 1957.

Today, nearly sixty years later, the Times is running a new article on the specter of technological unemployment, by Eduardo Porter. He writes:

[Lawrence Summers] reminisced about his undergraduate days at M.I.T. in the 1970s, when the debate over the idea of technological unemployment pitted “smart people,” exemplified by the great economist Robert Solow, and “stupid people,” “exemplified by a bunch of sociologists.”

It was stupid to think technological progress would reduce employment. If technology increased productivity — allowing companies and their workers to make more stuff in less time — people would have more money to spend on more things that would have to be made, creating jobs for other people.

But at some point Mr. Summers experienced an epiphany. “It sort of occurred to me,” he said. “Suppose the stupid people were right. What would it look like?” And what it looked like fits pretty well with what the world looks like today.

The fears about automation’s job-killing potential that erupted in the 1950s didn’t pan out. That’s one reason why smart economists — no, it’s not an oxymoron — became so convinced that technological unemployment, as a broad rather than a local phenomenon, was mythical. But yesterday’s automation is not today’s automation. What if a new wave of computer-generated automation, rather than putting more money into the hands of masses of consumers, ended up concentrating that wealth, in the form of greater profits, into the hands of a rather small group of plutocrats who owned and controlled the means of automation? And what if automation’s reach extended so far into the human skill set that the range of jobs immune to automation was no longer sufficient to absorb displaced workers? There may not be a “lump of labor,” but we may discover that there is a “lump of skills.”

Henry Ford increased the hourly wage of workers beyond what was economically necessary because he knew that the workers would use the money to buy Ford cars. He saw that he had an interest in broadening prosperity. It seems telling that it has now become popular among the Silicon Valley elite to argue that the government should step in and start paying people a universal basic income. With a universal basic income, even the unemployed would still be able to afford their smartphone data plans.

Image: detail of “Friend or Foe?” by Leslie Illingworth.

The width of now


“Human character changed on or about December 2010,” writes Edward Mendelson in “In the Depths of the Digital Age,” when “everyone, it seemed, started carrying a smartphone.”

For the first time, practically anyone could be found and intruded upon, not only at some fixed address at home or at work, but everywhere and at all times. Before this, everyone could expect, in the ordinary course of the day, some time at least in which to be left alone, unobserved, unsustained and unburdened by public or familial roles. That era now came to an end.

The self exploded as the social world imploded. The fuse had been burning for a long time, of course.

Mendelson continues:

In Thomas Pynchon’s Gravity’s Rainbow (1973), an engineer named Kurt Mondaugen enunciates a law of human existence: “Personal density … is directly proportional to temporal bandwidth.” The narrator explains: “’Temporal bandwidth’ is the width of your present, your now. … The more you dwell in the past and future, the thicker your bandwidth, the more solid your persona. But the narrower your sense of Now, the more tenuous you are.”

The genius of Mondaugen’s Law is its understanding that the unmeasurable moral aspects of life are as subject to necessity as are the measurable physical ones . . . You cannot reduce your engagement with the past and future without diminishing yourself, without becoming “more tenuous.”

The term “personal density” brings me back, yet again, to an observation the playwright Richard Foreman made, just before the arrival of the smartphone: “I see within us all (myself included) the replacement of complex inner density with a new kind of self — evolving under the pressure of information overload and the technology of the ‘instantly available.’”

The intensification of communication, and the attendant flow of information, aids in the development of personal density, of inner density, but only up to a point. Then the effect reverses. One is so overwhelmed by the necessity of communication — a necessity that may well be felt as a form of pleasure — that there is no longer any time for the synthesis or consolidation needed to build density. Little adheres, less coheres. Personal density at this point becomes inversely proportional to informational density. The only way to deal with the expansion of informational bandwidth is to constrict one’s temporal bandwidth — to narrow the “Now.” We are not unbounded; tradeoffs must be made.

Image: Andrew Gustar.

Just Google it

From Michael S. Evans’s review of Pedro Domingos’s The Master Algorithm:

For those in power, machine learning offers all of the benefits of human knowledge without the attendant dangers of organized human resistance. The Master Algorithm describes a world in which individuals can be managed faster than they can respond. What does this future really look like? It looks like the world we already have, but without recourse.

Framed and shot


“From the very beginning of her career,” writes Arthur Lubow, of Diane Arbus, “she was taking photographs to obtain a vital proof — a corroboration of her own existence. The pattern was set early. When she was 15, she described to a friend how she would undress at night in her lit bathroom and watch an old man across the courtyard watch her (until his wife complained). She not only wanted to see, she needed to be seen.”

The bathroom was a camera, in which the girl composed an image of herself, fixed in light, for the audience, the other, to see. But was it really a corroboration of her existence that she sought, or its annihilation? Existence is continuous, unrelenting. The fixed image, the discrete image, offers an escape from the flux. The camera never lies, but the truth it tells is not of this world.

A social medium is, in a sense, a camera, a room in which we compose ourselves for the other’s viewing. The stream may feel unrelenting as it pours through the phone, but it’s not continuous. It’s a series of fixed and discrete images, delivered visually or textually. It’s a film.

“She not only wanted to see, she needed to be seen.”

Social media, writes Rob Horning, can “facilitate an escapism through engagement.” He argues, drawing on Roy Baumeister’s 1988 Journal of Sex Research paper “Masochism as Escape from Self,” that social-media use can be considered a form of masochism. The self-construction that takes place on a network like Facebook or Snapchat is a mask for self-destruction. “It might seem weird to say that we express ourselves to escape ourselves. But self-expression can dissolve the self as well as build some enduring, legible version of a self.” And: “The platform’s constrictions take on the function of bondage, restricting autonomy to a limited set of actions.”

Arbus understood the paradox of overexposure — how, when carried to an extreme, exposure begins to erase the self. In the confines of a camera, light dissolves individuality; it’s disfiguring.

Image: Kris Haamer.

The future of Facebook is more bias, not less


“What makes social media unique,” writes Mark Zuckerberg in defending Facebook against charges of an anti-conservative slant in its promotion of “trending” news stories, is that “we are one global community where anyone can share anything — from a loving photo of a mother and her baby to intellectual analysis of political events.” The ideal of a global community of unfettered sharers, all equal in their sharing ability, is “the core of everything Facebook is,” he continues. “Every tool we build is designed to give more people a voice and bring our global community together.”

What doesn’t cross Zuckerberg’s mind is that he is here expressing his own ideological bias, a bias toward a kind of My Little Pony cosmopolitanism that is at once soggy-minded and imperialist. It is a bias so thoroughgoing that he is unable to conceive of it as being a bias. Surely, no one could look at the pursuit of a global community, organized under the auspices of a business that seeks complete control over people’s attention, as anything other than an unalloyed good. Kumbaya, bitch.

While Facebook continues to deny any systematic skewing of its news highlights, it does acknowledge “the possibility of isolated improper actions or unintentional bias.” It places the blame squarely on humans, those notoriously flawed beings whom the company stresses it is striving to eliminate from its information-filtering processes. “We currently use people to bridge the gap between what an algorithm can do today and what we hope it will be able to do in the future,” Facebook’s top lawyer, Colin Stretch, explains in a letter to Congress. Stretch doesn’t bother to mention that an algorithm is itself a product of human effort and judgment, but one senses that the company is probably hard at work developing an algorithm to write its headline-filtering algorithm, and after that it will seek to develop an algorithm to write the algorithm that writes the headline-filtering algorithm. Facebook won’t rest until it’s algorithms all the way down.

In the meantime, the company is making itself more insular to protect its algorithmic virtue. “We will eliminate our reliance on external websites and news outlets to identify, validate, or assess the importance of trending topics,” writes Stretch. Potential “trending topics” will be identified solely through a software program monitoring activity on Facebook. The problem with the news outlets is that they still occasionally use humans to make editorial judgments and hence can’t be trusted to be bias-free. Facebook wants to insulate itself from journalism even as it seeks to dominate journalism.

Still, it’s hard not to feel a little sympathy for Facebook in its current predicament. The reason it had to bring in humans to sift through news stories in the first place was that its trend-tracking algorithm was overly reliant on — you guessed it — human judgment. The algorithm aggregated the judgments of Facebook members, as expressed through Likes, repostings, and other subjective actions, and that led to an abundance of crap in the trending feed. As The Guardian‘s Nellie Bowles put it, “Truly viral news content tends to be terrible.” The wisdom of the crowd, when it comes to picking news stories for wide circulation, is indistinguishable from idiocy. So Facebook needed to bring in (individual) human judgment to correct for the flaws in (mass) human judgment.

Humans: can’t live with ’em, can’t live without ’em.

I’m guessing that at this point Zuckerberg rues the day he gave a thumbs-up to the Trending Topics section. Facebook’s News Feed, which is by far the social network’s most important and influential information feed, is infinitely more biased than the Trending Topics feed, but in the News Feed “bias” goes by the user-friendly name “personalization” and so draws little ire. People are happy to have their own bias fed back to them. It’s when they see things that don’t fit their bias that they start getting irritated and complaining about “bias.”

Facebook’s mistake was to attempt to create a universal, one-feed-fits-all headline service. The company put itself in a no-win situation. Even if it were possible to create a purely unbiased news feed, a lot of people would still perceive bias in it. And most people don’t want an unbiased news feed, anyway — they just want to be able to choose their own bias. So here, if you’ll allow me to exercise my own jaundiced bias, is what I bet will happen. Once all the fuss dies down, the Trending Topics section, in its current universal form, will quietly be eliminated. In its place, Facebook will start offering a variety of news “channels” that will be curated, for a fee or an ad-revenue split, by media outlets like Fox News, or Politico, or Breitbart, or Huffington Post, or Vice, or Funny or Die, or what have you. Facebook members will be free to choose whichever channel or channels they want to follow — they’ll be able to choose their own bias, in other words — and Facebook will tighten its grip over news distribution while also getting a new revenue stream. Now that’s a win-win.

The best way to bring a global community together is by letting its members indulge their own biases. Just make sure you call it “personalization.”

Image: Keith.

The green light

Gatsby’s real name, you’ll recall, was Gatz, so I guess it’s no surprise that The Great Gatsby is Bill Gates’s favorite novel:

The novel that I reread the most. Melinda and I love one line so much that we had it painted on a wall in our house: “His dream must have seemed so close that he could hardly fail to grasp it.”

Is it there as a warning, I wonder, or an inspiration?