
Technological unemployment, then and now


In “Promise and Peril of Automation,” an article in the New York Times, David Morse writes:

The key area of social change stimulated by automation is employment. Everywhere one finds two things: a positive emphasis on opportunity and a keen sensitivity to change, often translated more concretely into fear.

The emphasis on opportunity is welcome and indicative of the climate in which automation will come to maturity. The sensitivity to change is equally significant. If fears about the future, especially job worries, are dismissed as “unreal” or “unimportant,” human resistance to change will be a major impediment to deriving full social benefit from automation.

What is the basis for these fears? Partly, a natural human uneasiness in the face of the unknown. Partly, the fact that few things are more serious to a worker than unemployment. Partly, too, the fear that automation undercuts the whole employment structure on which society as we know it is based. If, for example, automation cuts direct labor, often by 50 percent or more, and if this goes on from one industry to another, what happens? Even with shorter hours and new opportunities, will not a saturation point be reached, with old jobs disappearing faster than new ones are created, and unemployment on a wide scale raising its ugly head and creeping from one undertaking and industry to another?

Morse’s article was published on June 9, 1957.

Today, nearly sixty years later, the Times is running a new article on the specter of technological unemployment, by Eduardo Porter. He writes:

[Lawrence Summers] reminisced about his undergraduate days at M.I.T. in the 1970s, when the debate over the idea of technological unemployment pitted “smart people,” exemplified by the great economist Robert Solow, and “stupid people,” “exemplified by a bunch of sociologists.”

It was stupid to think technological progress would reduce employment. If technology increased productivity — allowing companies and their workers to make more stuff in less time — people would have more money to spend on more things that would have to be made, creating jobs for other people.

But at some point Mr. Summers experienced an epiphany. “It sort of occurred to me,” he said. “Suppose the stupid people were right. What would it look like?” And what it looked like fits pretty well with what the world looks like today.

The fears about automation’s job-killing potential that erupted in the 1950s didn’t pan out. That’s one reason why smart economists — no, it’s not an oxymoron — became so convinced that technological unemployment, as a broad rather than a local phenomenon, was mythical. But yesterday’s automation is not today’s automation. What if a new wave of computer-generated automation, rather than putting more money into the hands of masses of consumers, ended up concentrating that wealth, in the form of greater profits, into the hands of a rather small group of plutocrats who owned and controlled the means of automation? And what if automation’s reach extended so far into the human skill set that the range of jobs immune to automation was no longer sufficient to absorb displaced workers? There may not be a “lump of labor,” but we may discover that there is a “lump of skills.”

Henry Ford increased the hourly wage of workers beyond what was economically necessary because he knew that the workers would use the money to buy Ford cars. He saw that he had an interest in broadening prosperity. It seems telling that it has now become popular among the Silicon Valley elite to argue that the government should step in and start paying people a universal basic income. With a universal basic income, even the unemployed would still be able to afford their smartphone data plans.

Image: detail of “Friend or Foe?” by Leslie Illingworth.

The width of now


“Human character changed on or about December 2010,” writes Edward Mendelson in “In the Depths of the Digital Age,” when “everyone, it seemed, started carrying a smartphone.”

For the first time, practically anyone could be found and intruded upon, not only at some fixed address at home or at work, but everywhere and at all times. Before this, everyone could expect, in the ordinary course of the day, some time at least in which to be left alone, unobserved, unsustained and unburdened by public or familial roles. That era now came to an end.

The self exploded as the social world imploded. The fuse had been burning for a long time, of course.

Mendelson continues:

In Thomas Pynchon’s Gravity’s Rainbow (1973), an engineer named Kurt Mondaugen enunciates a law of human existence: “Personal density … is directly proportional to temporal bandwidth.” The narrator explains: “’Temporal bandwidth’ is the width of your present, your now. … The more you dwell in the past and future, the thicker your bandwidth, the more solid your persona. But the narrower your sense of Now, the more tenuous you are.”

The genius of Mondaugen’s Law is its understanding that the unmeasurable moral aspects of life are as subject to necessity as are the measurable physical ones . . . You cannot reduce your engagement with the past and future without diminishing yourself, without becoming “more tenuous.”

The term “personal density” brings me back, yet again, to an observation the playwright Richard Foreman made, just before the arrival of the smartphone: “I see within us all (myself included) the replacement of complex inner density with a new kind of self — evolving under the pressure of information overload and the technology of the ‘instantly available.'”

The intensification of communication, and the attendant flow of information, aids in the development of personal density, of inner density, but only up to a point. Then the effect reverses. One is so overwhelmed by the necessity of communication — a necessity that may well be felt as a form of pleasure — that there is no longer any time for the synthesis or consolidation needed to build density. Little adheres, less coheres. Personal density at this point becomes inversely proportional to informational density. The only way to deal with the expansion of informational bandwidth is to constrict one’s temporal bandwidth — to narrow the “Now.” We are not unbounded; tradeoffs must be made.
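Put in purely figurative notation (my gloss, not Pynchon’s or Mendelson’s), Mondaugen’s Law and the amendment proposed above come to something like this:

```latex
% A figurative gloss, not a measurable relation:
%   D   = personal density
%   B_t = temporal bandwidth (the width of one's "Now")
%   B_i = informational bandwidth (the volume of incoming communication)
\[
  D \propto B_t \quad \text{(Mondaugen's Law)}
  \qquad\qquad
  D \propto \frac{1}{B_i} \quad \text{(past the saturation point)}
\]
```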

Image: Andrew Gustar.

Just Google it

From Michael S. Evans’s review of Pedro Domingos’s The Master Algorithm:

For those in power, machine learning offers all of the benefits of human knowledge without the attendant dangers of organized human resistance. The Master Algorithm describes a world in which individuals can be managed faster than they can respond. What does this future really look like? It looks like the world we already have, but without recourse.

The future of Facebook is more bias, not less


“What makes social media unique,” writes Mark Zuckerberg in defending Facebook against charges of an anti-conservative slant in its promotion of “trending” news stories, is that “we are one global community where anyone can share anything — from a loving photo of a mother and her baby to intellectual analysis of political events.” The ideal of a global community of unfettered sharers, all equal in their sharing ability, is “the core of everything Facebook is,” he continues. “Every tool we build is designed to give more people a voice and bring our global community together.”

What doesn’t cross Zuckerberg’s mind is that he is here expressing his own ideological bias, a bias toward a kind of My Little Pony cosmopolitanism that is at once soggy-minded and imperialist. It is a bias so thoroughgoing that he is unable to conceive of it as being a bias. Surely, no one could look at the pursuit of a global community, organized under the auspices of a business that seeks complete control over people’s attention, as anything other than an unalloyed good. Kumbaya, bitch.

While Facebook continues to deny any systematic skewing of its news highlights, it does acknowledge “the possibility of isolated improper actions or unintentional bias.” It places the blame squarely on humans, those notoriously flawed beings whom the company stresses it is striving to eliminate from its information-filtering processes. “We currently use people to bridge the gap between what an algorithm can do today and what we hope it will be able to do in the future,” Facebook’s top lawyer, Colin Stretch, explains in a letter to Congress. Stretch doesn’t bother to mention that an algorithm is itself a product of human effort and judgment, but one senses that the company is probably hard at work developing an algorithm to write its headline-filtering algorithm, and after that it will seek to develop an algorithm to write the algorithm that writes the headline-filtering algorithm. Facebook won’t rest until it’s algorithms all the way down.

In the meantime, the company is making itself more insular to protect its algorithmic virtue. “We will eliminate our reliance on external websites and news outlets to identify, validate, or assess the importance of trending topics,” writes Stretch. Potential “trending topics” will be identified solely through a software program monitoring activity on Facebook. The problem with the news outlets is that they still occasionally use humans to make editorial judgments and hence can’t be trusted to be bias-free. Facebook wants to insulate itself from journalism even as it seeks to dominate journalism.

Still, it’s hard not to feel a little sympathy for Facebook in its current predicament. The reason it had to bring in humans to sift through news stories in the first place was that its trend-tracking algorithm was overly reliant on — you guessed it — human judgment. The algorithm aggregated the judgments of Facebook members, as expressed through Likes, repostings, and other subjective actions, and that led to an abundance of crap in the trending feed. As The Guardian‘s Nellie Bowles put it, “Truly viral news content tends to be terrible.” The wisdom of the crowd, when it comes to picking news stories for wide circulation, is indistinguishable from idiocy. So Facebook needed to bring in (individual) human judgment to correct for the flaws in (mass) human judgment.
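To see why a purely crowd-driven ranking fills a feed with junk, consider a toy version of that sort of aggregation. The scoring scheme below is invented for illustration; Facebook’s actual formula has never been published.

```python
# A toy trending score (hypothetical; Facebook's real formula isn't public):
# rank stories by raw engagement, the crowd-driven approach described above.
# The stories and weights are invented for illustration.
stories = [
    {"headline": "City council passes transit budget", "likes": 320,   "reshares": 40},
    {"headline": "You won't believe this cat video",   "likes": 90000, "reshares": 25000},
    {"headline": "Celebrity wardrobe malfunction",     "likes": 41000, "reshares": 9000},
]

def trending_score(story, like_weight=1.0, reshare_weight=3.0):
    """Aggregate Likes and reshares into a single popularity score."""
    return like_weight * story["likes"] + reshare_weight * story["reshares"]

# Ranking purely by engagement puts the cat video on top, which is exactly
# why human editors were brought in to correct the output.
for story in sorted(stories, key=trending_score, reverse=True):
    print(f"{trending_score(story):>9.0f}  {story['headline']}")
```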

Humans: can’t live with ’em, can’t live without ’em.

I’m guessing that at this point Zuckerberg rues the day he gave a thumbs-up to the Trending Topics section. Facebook’s News Feed, which is by far the social network’s most important and influential information feed, is infinitely more biased than the Trending Topics feed, but in the News Feed “bias” goes by the user-friendly name “personalization” and so draws little ire. People are happy to have their own bias fed back to them. It’s when they see things that don’t fit their bias that they start getting irritated and complaining about “bias.”

Facebook’s mistake was to attempt to create a universal, one-feed-fits-all headline service. The company put itself in a no-win situation. Even if it were possible to create a purely unbiased news feed, a lot of people would still perceive bias in it. And most people don’t want an unbiased news feed, anyway — they just want to be able to choose their own bias. So here, if you’ll allow me to exercise my own jaundiced bias, is what I bet will happen. Once all the fuss dies down, the Trending Topics section, in its current universal form, will quietly be eliminated. In its place, Facebook will start offering a variety of news “channels” that will be curated, for a fee or an ad-revenue split, by media outlets like Fox News, or Politico, or Breitbart, or Huffington Post, or Vice, or Funny or Die, or what have you. Facebook members will be free to choose whichever channel or channels they want to follow — they’ll be able to choose their own bias, in other words — and Facebook will tighten its grip over news distribution while also getting a new revenue stream. Now that’s a win-win.

The best way to bring a global community together is by letting its members indulge their own biases. Just make sure you call it “personalization.”

Image: Keith.

The green light

Gatsby’s real name, you’ll recall, was Gatz, so I guess it’s no surprise that The Great Gatsby is Bill Gates’s favorite novel:

The novel that I reread the most. Melinda and I love one line so much that we had it painted on a wall in our house: “His dream must have seemed so close that he could hardly fail to grasp it.”

Is it there as a warning, I wonder, or an inspiration?

Are maps necessary?

If you own a smartphone, you have a detailed, up-to-date atlas on your person at all times. This is something new in the world. As the cartographer Justin O’Beirne wrote last year:

An unprecedented level of detail is now available to the average person, for little or no cost. The same [digital] map literally shows every human settlement in the world at every scale, from the world’s largest cities to its tiniest neighborhoods and hamlets. Every country. Every city. Every road. All mapped in exquisite detail.

It would seem to be the golden age of maps and map-reading. And yet, even as the map is becoming omnipresent, the map is fading in importance. If your phone will give you detailed directions whenever you need them, telling you where and when to turn, or your car or other vehicle will get you where you want to go automatically on command, then there’s no need to consult a map to figure out where you are or where you’re going. If a machine can read a map, a person doesn’t have to. The map is subsumed by the app or the vehicle (or even the shoe).
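What the machine does when it reads the map for you is, at bottom, a shortest-path search over a graph of road segments. Here is a minimal sketch, using an invented toy road network rather than any real mapping API, of the route selection that now happens out of sight:

```python
import heapq

# A minimal sketch of automated route selection over a toy road network
# (invented data, not any real mapping API). Edge weights are travel minutes.
ROADS = {
    "Home":    {"MainSt": 4, "RiverRd": 6},
    "MainSt":  {"Home": 4, "RiverRd": 3, "Bridge": 7},
    "RiverRd": {"Home": 6, "MainSt": 3, "Bridge": 5},
    "Bridge":  {"MainSt": 7, "RiverRd": 5, "Office": 2},
    "Office":  {"Bridge": 2},
}

def route(start, goal):
    """Dijkstra's shortest-path search: the machine reads the map so you don't."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, cost in ROADS[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
    return None

print(route("Home", "Office"))  # (13, ['Home', 'MainSt', 'Bridge', 'Office'])
```

Swap in millions of road segments and live traffic weights and the search is the same in kind; the person holding the phone never has to see the map at all.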

But let’s back up, for a broader view. O’Beirne pointed out in his post that we are well on our way to having a “universal map.” Not only will everyone have a detailed map readily available at all times, but that map will be identical to everyone else’s map. If there’s one free, detailed, always-available map of the world in existence, you don’t need any others. In fact, maintaining others would be redundant, a waste of labor. Google Maps already has well over a billion users, and that number gets bigger all the time. As O’Beirne wrote, “As smartphone usage continues to explode, how long will it be until the majority of the world is using the same map? And what are the implications of this?”

Indeed.

Now, in a new and illuminating post, “What Happened to Google Maps?,” O’Beirne offers a thoroughgoing assessment of what our universal map is coming to look like. He examines how Google Maps has changed over the last few years, with a particular focus on its varying levels of resolution. What he discovers is that, as a cartographic tool, Google Maps has gone to hell. Detail has been lost and, along with it, context. (Detail reappears as you zoom way in, but by then the larger context, and the sense of place that goes with it, has been sacrificed.) If you want to use a Google Map in a traditional way, as a means, say, to plot a course between a couple of cities a hundred miles apart, you’re going to be frustrated. O’Beirne provides an example of how Google Maps’ display of New York City and its environs changed between 2010 and 2016:

[Image: Google Maps rendering of the New York City area, 2010 compared with 2016]

Not only have most of the cities disappeared (Stamford and Princeton remain, curiously, but the larger Newark and Bridgeport are gone), but the roads have at once multiplied and turned into a confusing jumble. Look at the display of Long Island roads, for instance. Relatively minor connecting highways have been given the same visual weight as major highways. A label for Route 495 has been added, but it just floats over a welter of equally sized roads. Comments O’Beirne: “In 2010, there were plenty of roads in the area, but you could at least follow each one individually. In 2016, however, the area has become a mess. With so many roads so close, they all bleed together, and it’s difficult to trace the path of any single road with your eyes.” By any standard of cartographic design, Google Maps in its current incarnation is a disaster.

In another example, O’Beirne contrasts how an old paper map displays the Chicago area . . .

[Image: paper road map of the Chicago area]

. . . with how that same area appears now in Google Maps:

[Image: Google Maps rendering of the same Chicago area]

The Google Map is, arguably, more pleasing to look at than the paper map, but in design terms the Google Map is far less efficient. Essential details have been erased, while road clutter has been magnified. As a tool for navigating the area, the Google Map is pretty much useless. And the Google Map, let’s remember, is becoming our universal map.

O’Beirne is a bit mystified by the changes Google has wrought. He suspects that they were inspired by a decision to optimize Google Maps for smartphone displays. “Unfortunately,” he writes, “these ‘optimizations’ only served to exacerbate the longstanding imbalances [between levels of detail] already in the maps. As is often the case with cartography: less isn’t more. Less is just less. And that’s certainly the case here.” I’m sure that’s true. Adapting to “mobile” is the bane of the modern interface designer. (And let’s not overlook the fact that the “cleaner” Google Map provides a lot of open space for future ad placements.)

Google, though, is adept at tailoring interfaces to devices. Yet the new map design appears on big computer screens as well as tiny phone screens. That suggests that there’s something more profound going on than just the need to squeeze a map onto a small device. Implicit in the Google changes is the obsolescence of the map as a navigational tool. Turn-by-turn directions and automated route selection mean that fewer and fewer people ever have to figure out how to get from one place to another or even to know where they are. As a navigation aid, the map is becoming a vestigial organ. So why not get rid of the useful details and start to think of the map as merely a picture or an image, or a canvas for advertisements?

We’re in a moment of transition, as the automation of navigation shifts responsibility for map-reading from man to machine. It’s a great irony: The universal map arrives at the very moment that we no longer need it.

Photo: teddy-rised.

The enigma of the robot-batted shuttlecock


From “Robots Must Do More Than Just Playing Sports,” an article in today’s China Daily:

Premier Li Keqiang visited a town in Chengdu, capital of Southwest China’s Sichuan province, on Monday, during which he played badminton with a robot.

Yang Feng, an associate professor on automation from Northwestern Polytechnical University, commented: “In order to play badminton, a droid needs high-accuracy vision and image processing, as well as precise motion control. It has to recognize the shuttlecock in flight and calculate its trajectory and then anticipate where it can hit the shuttlecock.”
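Yang’s checklist (recognize the shuttlecock, calculate its trajectory, anticipate the hit point) is, at heart, a prediction problem. As a loose sketch of the last two steps, with rough guesses for the shuttle’s drag and speed and no claim to resemble the Chengdu robot’s actual software, a badminton robot might integrate the shuttle’s flight forward until it drops to racket height:

```python
# A loose sketch, not the Chengdu robot's code: given one vision estimate of the
# shuttle's position and velocity, predict where it will fall through racket
# height. The drag and speed figures below are rough assumptions.
G = 9.81               # gravity, m/s^2
V_TERMINAL = 6.8       # rough terminal speed of a shuttlecock, m/s
K = G / V_TERMINAL**2  # quadratic-drag constant: drag balances gravity at V_TERMINAL

def predict_hit_point(pos, vel, hit_height=1.5, dt=0.001):
    """Integrate the flight forward until the shuttle descends through hit_height.

    pos and vel are (x, y, z) in metres and m/s, e.g. from stereo vision.
    Returns the predicted (x, y) interception point and the time to reach it.
    """
    x, y, z = pos
    vx, vy, vz = vel
    t = 0.0
    while not (vz < 0 and z <= hit_height):
        speed = (vx * vx + vy * vy + vz * vz) ** 0.5
        ax = -K * speed * vx          # drag opposes the direction of travel
        ay = -K * speed * vy
        az = -K * speed * vz - G      # drag plus gravity
        vx += ax * dt; vy += ay * dt; vz += az * dt
        x += vx * dt;  y += vy * dt;  z += vz * dt
        t += dt
    return (x, y), t

point, eta = predict_hit_point(pos=(0.0, 0.0, 3.0), vel=(4.0, 0.5, 1.0))
print(f"move the racket to x={point[0]:.2f} m, y={point[1]:.2f} m within {eta:.2f} s")
```

In practice the vision system would refresh that estimate many times during a single flight, but the shape of the problem stays the same.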

Did you know that the word “shuttlecock” was coined 500 years ago? It’s a hell of a sturdy word, and one that I try to use in conversation every day.

The anonymous journalist who wrote the China Daily story was grudging in his praise of the badminton-playing robot:

Early in 2011, Zhejiang University developed Wu and Kong, two special sporting droids, which could play table tennis with each other and with human players. In that sport, the robots need to recognize the ball more precisely than in playing badminton. Instead of a technological breakthrough, the droid that plays badminton in Chengdu can be better called a good, practical model that uses these technologies.

“A good, practical model”? For what, exactly?

The headline “Robots Must Do More Than Just Playing Sports,” while wonderful, is mysterious. The article, as Eamonn Fitzgerald observes, “contains nothing to support the demand asserted in the headline.”

I find a clue to the mystery in a new piece on the ongoing productivity paradox, this one appearing in today’s Times. Despite all the excitement about how super-efficient robots and software are displacing lazy humans from jobs, labor productivity remains in the doldrums:

The number of hours Americans worked rose 1.9 percent in the year ended in March. New data released Thursday showed that gross domestic product in the first quarter was up 1.9 percent over the previous year. Despite constant advances in software, equipment and management practices to try to make corporate America more efficient, actual economic output is merely moving in lock step with the number of hours people put in, rather than rising as it has throughout modern history.

We could chalk that up to a statistical blip if it were a single year; productivity data are notoriously volatile. But this has been going on for some time.

If computers are going to take over jobs on a massive scale, then labor productivity — output per human worker — is going to go way up. Way, way up. But, despite years of heavy investment in automation and years of rapid advances in information technology, we have seen no sign of that happening. Productivity is moribund. Productivity measures are notoriously fuzzy, and some economists speculate that computer-inspired productivity gains are not being captured by traditional economic measures. There’s something to that idea but, at least when it comes to the labor market, probably not all that much. The mismeasurement hypothesis has been debunked, or at least tempered, by studies like this one and this one. If computers and robots are taking over the labor market, we’re going to see it in the labor productivity statistics. And we’re not. Computers are changing jobs in deep ways, but they’re not rendering the human worker obsolete — and in some cases, as we’ve seen in the past, software may actually dampen productivity by distracting workers or encouraging them to spend more time on trivial tasks.
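The arithmetic behind that moribund number is easy to check against the figures quoted above: labor productivity is output divided by hours worked, so its growth rate is roughly output growth minus hours growth, and 1.9 percent minus 1.9 percent is zero. A quick sketch:

```python
# A quick check against the figures quoted above: labor productivity is output
# per hour worked, so its growth is roughly output growth minus hours growth.
output_growth = 0.019  # GDP up 1.9 percent over the previous year
hours_growth = 0.019   # hours worked up 1.9 percent in the year ended in March

approx_productivity_growth = output_growth - hours_growth
exact_productivity_growth = (1 + output_growth) / (1 + hours_growth) - 1

print(f"approximate productivity growth: {approx_productivity_growth:.2%}")  # 0.00%
print(f"exact productivity growth:       {exact_productivity_growth:.3%}")   # 0.000%
```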

What we may be seeing is what I’ll term the Shuttlecock Paradox. Robots are capable of doing amazing things — playing badminton with the premier, for instance — but the amazingness is often thin and brittle. Robots may soon be able to beat the best badminton players in the world, but that’s not going to put professional badminton players out of work. Because it’s still a lot more fun to watch people play badminton than to watch robots play badminton. Remember how automatic teller machines were going to put bank tellers out of work? And yet, even though ATMs are everywhere, there are more bank tellers at work today than when ATMs were invented.

What we may be mismeasuring is the gap between robot performance and human performance — and the fact that a whole lot of jobs, old ones and new ones, good ones and drab ones, may fit in that gap. “Robots Must Do More Than Just Playing Sports”: It’s a gnomic headline, to be sure, but I sense profundities in it.

Photo: Judit Klein.