The green light

Gatsby’s real name, you’ll recall, was Gatz, so I guess it’s no surprise that The Great Gatsby is Bill Gates’s favorite novel:

The novel that I reread the most. Melinda and I love one line so much that we had it painted on a wall in our house: “His dream must have seemed so close that he could hardly fail to grasp it.”

Is it there as a warning, I wonder, or an inspiration?

Are maps necessary?

If you own a smartphone, you have a detailed, up-to-date atlas on your person at all times. This is something new in the world. As the cartographer Justin O’Beirne wrote last year:

An unprecedented level of detail is now available to the average person, for little or no cost. The same [digital] map literally shows every human settlement in the world at every scale, from the world’s largest cities to its tiniest neighborhoods and hamlets. Every country. Every city. Every road. All mapped in exquisite detail.

It would seem to be the golden age of maps and map-reading. And yet, even as the map is becoming omnipresent, the map is fading in importance. If your phone will give you detailed directions whenever you need them, telling you where and when to turn, or your car or other vehicle will get you where you want to go automatically on command, then there’s no need to consult a map to figure out where you are or where you’re going. If a machine can read a map, a person doesn’t have to. The map is subsumed by the app or the vehicle (or even the shoe).

But let’s back up, for a broader view. O’Beirne pointed out in his post that we are well on our way to having a “universal map.” Not only will everyone have a detailed map readily available at all times, but that map will be identical to everyone else’s map. If there’s one free, detailed, always-available map of the world in existence, you don’t need any others. In fact, maintaining others would be redundant, a waste of labor. Google Maps already has well over a billion users, and that number gets bigger all the time. As O’Beirne wrote, “As smartphone usage continues to explode, how long will it be until the majority of the world is using the same map? And what are the implications of this?”

Indeed.

Now, in a new and illuminating post, “What Happened to Google Maps?,” O’Beirne offers a thoroughgoing assessment of what our universal map is coming to look like. He examines how Google Maps has changed over the last few years, with a particular focus on its varying levels of resolution. What he discovers is that, as a cartographic tool, Google Maps has gone to hell. Detail has been lost and, along with it, context. (Detail reappears as you zoom way in, but by then the larger context, and the sense of place that goes with it, has been sacrificed.) If you want to use a Google Map in a traditional way, as a means, say, to plot a course between a couple of cities a hundred miles apart, you’re going to be frustrated. O’Beirne provides an example of how Google Maps’ display of New York City and its environs changed between 2010 and 2016:

[Image: Google Maps' display of New York City and environs, 2010 vs. 2016]

Not only have most of the cities disappeared (Stamford and Princeton remain, curiously, but the larger Newark and Bridgeport are gone), but the roads have at once multiplied and turned into a confusing jumble. Look at the display of Long Island roads, for instance. Relatively minor connecting highways have been given the same visual weight as major highways. A label for Route 495 has been added, but it just floats over a welter of equally sized roads. Comments O’Beirne: “In 2010, there were plenty of roads in the area, but you could at least follow each one individually. In 2016, however, the area has become a mess. With so many roads so close, they all bleed together, and it’s difficult to trace the path of any single road with your eyes.” By any standard of cartographic design, Google Maps in its current incarnation is a disaster.

In another example, O’Beirne contrasts how an old paper map displays the Chicago area . . .

[Image: paper map of the Chicago area]

. . . with how that same area appears now in Google Maps:

[Image: Google Maps' display of the Chicago area]

The Google Map is, arguably, more pleasing to look at than the paper map, but in design terms the Google Map is far less efficient. Essential details have been erased, while road clutter has been magnified. As a tool for navigating the area, the Google Map is pretty much useless. And the Google Map, let’s remember, is becoming our universal map.

O’Beirne is a bit mystified by the changes Google has wrought. He suspects that they were inspired by a decision to optimize Google Maps for smartphone displays. “Unfortunately,” he writes, “these ‘optimizations’ only served to exacerbate the longstanding imbalances [between levels of detail] already in the maps. As is often the case with cartography: less isn’t more. Less is just less. And that’s certainly the case here.” I’m sure that’s true. Adapting to “mobile” is the bane of the modern interface designer. (And let’s not overlook the fact that the “cleaner” Google Map provides a lot of open space for future ad placements.)

Google, though, is adept at tailoring interfaces to devices. Yet the new map design appears on big computer screens as well as tiny phone screens. That suggests that there’s something more profound going on than just the need to squeeze a map onto a small device. Implicit in the Google changes is the obsolescence of the map as a navigational tool. Turn-by-turn directions and automated route selection mean that fewer and fewer people ever have to figure out how to get from one place to another or even to know where they are. As a navigation aid, the map is becoming a vestigial organ. So why not get rid of the useful details and start to think of the map as merely a picture or an image, or a canvas for advertisements?

We’re in a moment of transition, as the automation of navigation shifts responsibility for map-reading from man to machine. It’s a great irony: The universal map arrives at the very moment that we no longer need it.

Photo: teddy-rised.

The enigma of the robot-batted shuttlecock

[Image: shuttlecock]

From “Robots Must Do More Than Just Playing Sports,” an article in today’s China Daily:

Premier Li Keqiang visited a town in Chengdu, capital of Southwest China’s Sichuan province, on Monday, during which he played badminton with a robot.

Yang Feng, an associate professor on automation from Northwestern Polytechnical University, commented: “In order to play badminton, a droid needs high-accuracy vision and image processing, as well as precise motion control. It has to recognize the shuttlecock in flight and calculate its trajectory and then anticipate where it can hit the shuttlecock.”
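To make Yang's checklist concrete: the "calculate its trajectory" step reduces, in its simplest form, to estimating the shuttle's velocity from successive camera sightings and projecting its flight forward to find an interception point. Here is a toy sketch of that idea — my own illustration, not the Chengdu droid's actual code — which assumes pure ballistic flight, even though a real shuttlecock's heavy drag would demand a fancier model:

```python
# Toy illustration of the trajectory-prediction step a badminton robot
# would need. A back-of-envelope sketch only: it ignores aerodynamic
# drag (which dominates a real shuttlecock's flight) and assumes the
# vision system delivers perfect position measurements.

GRAVITY = 9.81  # m/s^2

def predict_intercept(p0, p1, dt, hit_height):
    """Given two observed shuttle positions p0 and p1 (x, y, z in
    meters) taken dt seconds apart, estimate where the shuttle will
    cross hit_height (meters) on its way down."""
    # Estimate the current velocity from the two sightings.
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    vz = (p1[2] - p0[2]) / dt

    # Solve z(t) = z0 + vz*t - 0.5*g*t^2 = hit_height for the later root.
    z0 = p1[2]
    a, b, c = -0.5 * GRAVITY, vz, z0 - hit_height
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # shuttle never reaches hit_height
    t = (-b - disc ** 0.5) / (2 * a)  # descending (later) crossing
    # Project the horizontal motion forward to that moment.
    return (p1[0] + vx * t, p1[1] + vy * t, hit_height)

# Example: two sightings 20 ms apart; racket plane at 1 m.
print(predict_intercept((0.0, 0.0, 2.5), (0.05, 0.01, 2.52), 0.02, 1.0))
```

Even this stripped-down version shows why Yang stresses "high-accuracy vision": the velocity estimate comes from differencing two noisy measurements, so small sighting errors get amplified into large errors in the predicted interception point.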

Did you know that the word “shuttlecock” was coined 500 years ago? It’s a hell of a sturdy word, and one that I try to use in conversation every day.

The anonymous journalist who wrote the China Daily story was grudging in his praise of the badminton-playing robot:

Early in 2011, Zhejiang University developed Wu and Kong, two special sporting droids, which could play table tennis with each other and with human players. In that sport, the robots need to recognize the ball more precisely than in playing badminton. Instead of a technological breakthrough, the droid that plays badminton in Chengdu can be better called a good, practical model that uses these technologies.

“A good, practical model”? For what, exactly?

The headline “Robots Must Do More Than Just Playing Sports,” while wonderful, is mysterious. The article, as Eamonn Fitzgerald observes, “contains nothing to support the demand asserted in the headline.”

I find a clue to the mystery in a new piece on the ongoing productivity paradox, this one appearing in today’s Times. Despite all the excitement about how super-efficient robots and software are displacing lazy humans from jobs, labor productivity remains in the doldrums:

The number of hours Americans worked rose 1.9 percent in the year ended in March. New data released Thursday showed that gross domestic product in the first quarter was up 1.9 percent over the previous year. Despite constant advances in software, equipment and management practices to try to make corporate America more efficient, actual economic output is merely moving in lock step with the number of hours people put in, rather than rising as it has throughout modern history.

We could chalk that up to a statistical blip if it were a single year; productivity data are notoriously volatile. But this has been going on for some time.
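It's worth spelling out the arithmetic implied by those figures. Labor productivity is output divided by hours worked, so its growth rate is, to a close approximation, output growth minus hours growth — and 1.9 percent minus 1.9 percent is zero. A quick check, my own back-of-envelope using only the numbers the Times reports:

```python
# Back-of-envelope check of the Times figures quoted above.
# Productivity = output / hours, so its growth rate is roughly
# output growth minus hours growth (exact form shown below).
output_growth = 0.019  # GDP up 1.9% year over year
hours_growth = 0.019   # hours worked up 1.9% year over year

productivity_growth = (1 + output_growth) / (1 + hours_growth) - 1
print(f"{productivity_growth:.2%}")  # 0.00% -- output in lock step with hours
```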

If computers are going to take over jobs on a massive scale, then labor productivity — output per human worker — is going to go way up. Way, way up. But, despite years of heavy investment in automation and years of rapid advances in information technology, we have seen no sign of that happening. Productivity is moribund. Productivity measures are notoriously fuzzy, and some economists speculate that computer-inspired productivity gains are not being captured by traditional economic measures. There’s something to that idea but, at least when it comes to the labor market, probably not all that much. The mismeasurement hypothesis has been debunked, or at least tempered, by studies like this one and this one. If computers and robots are taking over the labor market, we’re going to see it in the labor productivity statistics. And we’re not. Computers are changing jobs in deep ways, but they’re not rendering the human worker obsolete — and in some cases, as we’ve seen in the past, software may actually dampen productivity by distracting workers or encouraging them to spend more time on trivial tasks.

What we may be seeing is what I’ll term the Shuttlecock Paradox. Robots are capable of doing amazing things — playing badminton with the premier, for instance — but the amazingness is often thin and brittle. Robots may soon be able to beat the best badminton players in the world, but that’s not going to put professional badminton players out of work. Because it’s still a lot more fun to watch people play badminton than to watch robots play badminton. Remember how automatic teller machines were going to put bank tellers out of work? And yet, even though ATMs are everywhere, there are more bank tellers at work today than when ATMs were invented.

What we may be mismeasuring is the gap between robot performance and human performance — and the fact that a whole lot of jobs, old ones and new ones, good ones and drab ones, may fit in that gap. “Robots Must Do More Than Just Playing Sports”: It’s a gnomic headline, to be sure, but I sense profundities in it.

Photo: Judit Klein.

147 easy pieces

The advance review copies have arrived:

[Image: advance review copies]

Seventy-nine of the best posts from a decade of Rough Type.

Sixteen collected articles and reviews.

Fifty tweetforms.

Two new essays: “Silicon Valley Days” and “The Daedalus Mission.”

Emojis out the wazoo.

Utopia is creepy.

The leap

[Image: mind interface]

Reading that new Playboy interview with Ray Kurzweil sent me back to the notorious interview Playboy did with Larry Page and Sergey Brin in 2004. (The interview nearly torpedoed Google’s IPO, you’ll recall.) Page and Brin’s anticipation of a grand computer-human mind-meld neatly prefigures Kurzweil’s speculations:

PAGE: The more information you have, the better.

PLAYBOY: Yet more isn’t necessarily better.

BRIN: Exactly. This is why it’s a complex problem we’re solving. You want access to as much as possible so you can discern what is most relevant and correct. The solution isn’t to limit the information you receive. Ultimately you want to have the entire world’s knowledge connected directly to your mind.

PLAYBOY: Is that what we have to look forward to?

BRIN: Well, maybe. I hope so. At least a version of that. We probably won’t be looking up everything on a computer.

PLAYBOY: Is your goal to have the entire world’s knowledge connected directly to our minds?

BRIN: To get closer to that — as close as possible.

PLAYBOY: At some point doesn’t the volume become overwhelming?

BRIN: Your mind is tremendously efficient at weighing an enormous amount of information. We want to make smarter search engines that do a lot of the work for us. The smarter we can make the search engine, the better. Where will it lead? Who knows? But it’s credible to imagine a leap as great as that from hunting through library stacks to a Google session, when we leap from today’s search engines to having the entirety of the world’s information as just one of our thoughts.

What’s telling is that twelve years have gone by — twelve years of enormous advances in digital computing and networking — and yet, at a practical level, we’re no closer to hooking computers and brains together in the way Page, Brin, and Kurzweil imagine. In fact, we have no clue as to how you’d even go about such a project. As Kevin Kelly once observed, “the singularity is always near.” And that is where it is likely to remain.

Gigantic, a big big brain

[Image: starlust]

Ten years ago, Larry Page and Sergey Brin couldn’t stop talking about their excitement at the prospect of extending or replacing the human brain with computers. For the last several years, they’ve been much quieter about their mind-disruption project. I sense that a soldier in Google’s flack army warned them that in voicing their fantasies they risked weirding people out. Renovating the species is a job best done on the sly.

Still, we’ll always have Ray Kurzweil. (I mean that literally.) When, in 2012, Google hired the inventor and immortalist as a director of engineering, it also gained a new mouthpiece for its boldest ambitions. Playboy has just published a wide-ranging interview with Kurzweil in which he discusses everything from his hobbies (“I like to take naps”) to his anxieties (“unstructured social situations make me nervous”). The big thrust, though, is the imminent upgrading of homo sapiens:

By the 2030s we will have nanobots that can go into a brain non-invasively through the capillaries, connect to our neocortex and basically connect it to a synthetic neocortex that works the same way in the cloud. So we’ll have an additional neocortex, just like we developed an additional neocortex 2 million years ago, and we’ll use it just as we used the frontal cortex: to add additional levels of abstraction. We’ll create more profound forms of communication than we’re familiar with today, more profound music and funnier jokes. We’ll be funnier. We’ll be sexier. We’ll be more adept at expressing loving sentiments.

He brings the discussion down to earth with an example:

Let’s say I’m walking along and I see my boss at Google, Larry Page, approaching. I have three seconds to come up with something clever to say, and the 300 million modules in my neocortex won’t cut it. I need a billion modules for two seconds. I’ll be able to access that in the cloud just as we can access additional computation in the cloud for our mobile phones, and I’ll be able to say exactly the right thing.

I think there’s a flaw in Kurzweil’s logic here. He fails to anticipate the inevitable arms race in cleverness. Larry Page is going to be plugged into that enormous cloud neocortex, too, so surely Page’s standards for what qualifies as a clever remark will have gone up exponentially. Even with his new brain, Kurzweil will still be in exactly the same boat, floundering to muster the wit necessary to impress the boss. Funny and sexy are relative terms.

As to the expression of loving sentiments, the interview goes deep into Kurzweil’s views on the future of copulation, which would appear to be indistinguishable from the future of onanism. I’ll spare you the details, but when the interviewer, David Hochman, asks Kurzweil whether there’s “anyone whose body you would like to inhabit” in order to have sex in virtual reality, Kurzweil replies, “Probably some attractive woman. If I had to pick one? Amy Adams. I like the perky way she uses her body.”

Time for a nap, Ray.

Image: Ramona.Forcella

Context collapse and context restoration

[Image: wall]

Facebook has a problem. Its members aren’t sharing as much as they used to. At least they’re not sharing firsthand the way they used to. Instead of posting notices about what they’re doing or thinking, or where they are, or whom they’re hanging out with, they’re just recycling secondhand stuff — news stories, songs, other people’s photos or tweets, YouTube videos, etc. The nature of what they share on the network is changing from the personal to the impersonal, from the informal to the formal, from the subjective to the objective. To put it into media terms, which would seem to be the appropriate terms, they are shifting their role from that of actor to that of producer or publisher or aggregator.

To the extent that people still post reports on their firsthand experiences, they’re tending to use more selective networks, like Snapchat, that offer more precise audience control. People are retreating from public displays of experience to more private displays. They’re shifting from mass media to narrower media that, in their intimacy, more closely resemble traditional social settings.

Because Facebook feeds on personal sharing the way a vampire feeds on blood — the more intimate the information you publish, the more Facebook knows about you, and the more precisely it can tailor ads and other messages — any decline in personal sharing is ominous for the company. It’s no surprise that Facebook is now trying to figure out some interface tweaks and tricks that will, as a company spokesperson puts it, “make sharing on Facebook more fun and dynamic.” It’s hard not to hear a hint of desperation in that statement.

Facebook employees, according to a Bloomberg story, refer to the curtailment of personal sharing as “context collapse.” But that’s completely wrong. Context collapse is a sociological term of art that describes the way social media tend to erase the boundaries that once defined people’s social lives. Before social media came along, your social life played out in different and largely separate spheres. You had your friends in one sphere, your family members in another sphere, your coworkers in still another sphere, and so on. The spheres overlapped, but they remained distinct. The self you presented to your family was not the same self you presented to your friends, and the self you presented to your friends was not the one you presented to the people you worked with or went to school with. With a social network like Facebook, all these spheres merge into a single sphere. Everybody sees what you’re doing. Context collapses.

When Mark Zuckerberg infamously said, “You have one identity; the days of you having a different image for your work friends or your co-workers and for the people you know are probably coming to an end pretty quickly,” he was celebrating context collapse. Context collapse is a wonderful thing for a company like Facebook because a uniform self, a self without context, is easy to package as a commodity. The protean self is a fly in the Facebook ointment.

Facebook’s problem now is not context collapse but its opposite: context restoration. When people start backing away from broadcasting intimate details about themselves, it’s a sign that they’re looking to reestablish some boundaries in their social lives, to mend the walls that social media has broken. It’s an acknowledgment that the collapse of multiple social contexts into a single one-size-fits-all context circumscribes a person’s freedom. There’s only so much fun you can have if you know that your mom, your boss, and your weird neighbor are all watching. The protean self, we’re rediscovering, is a more comfortable self than the uniform self. Being forced into “one identity” is a drag.

There’s something else going on here, too. We’re learning how difficult and exhausting it is to sustain a mass-media presence. The problem with broadcasting everyday experience is that everyday experience is inevitably repetitive, and repetitiveness is, in a media context, the kiss of death. To remain interesting when viewed at a distance, when viewed through media, a person has to display continuing novelty — novelty of experience, novelty of thought. Very few of us can do that for very long. I imagine that, on Facebook, even Oscar Wilde and Dorothy Parker would have worn out their welcomes after a while.

The repetitiveness of our lives remains interesting to our family and our close friends, but outside that intimate context it gets boring. As reality TV stars, we all face declining ratings and, in the end, cancellation.

Photo: Justin Pickard.