I have seen the future of music, and its name is ThinkEar.
A new audio gadget from, oddly enough, a Finnish oil company named Neste, ThinkEar is a set of “mind-controlled earphones” that will allow your brain to choose the songs you listen to without any input from your thumbs or other body parts. Let’s go to the press release:
The world is poised on the brink of a technological revolution; rapid progress in brain mapping technology means that the ability to control devices with our minds is no longer the stuff of science fiction. Neste’s ThinkEar earphones are a bold entertainment concept that offers thought-controlled personal audio.
If I had listened to Gary Numan instead of Gang of Four when I was growing up, I would have seen all this shit coming. I mean, the guy was already using an Amazon Echo in 1979:
Back to the press release:
Making full use of the latest developments in brain wearables, the earphone’s integrated 5 point EEG sensors are able to read your brainwaves while an integrated microcomputer translates them into interaction commands to navigate your audio content.
You know who had the nicest brain wearables? The Borg.
OK, so here’s where the press release reaches its climax:
Unlike other systems, the earphones are not tethered to any external device. [They] access your favorite cloud services directly.
Which means, of course, that the cloud services will also be able to access your brainwaves directly. (Interaction is not a one-way street.) And that’s where things get really cool — you might even say numanesque. Remember when I last wrote about the future of pop? It was a year ago when Google announced the shift of its Google Play Music service from the old paradigm of listener-selected music to the new paradigm of outsourced “activity-based” music. As Google explained:
At any moment in your day, Google Play Music has whatever you need music for — from working, to working out, to working it on the dance floor — and gives you curated radio stations to make whatever you’re doing better. Our team of music experts … crafts each station song by song so you don’t have to.
ThinkEar is the missing link in mind-free listening. With your ThinkEar EEG sensors in place, Google will be able to read your brainwaves, on a moment by moment basis, and serve up an engineered set of tunes perfectly geared to your mental state as well as your activity mode. Not only will you save enormous amounts of time that you would have wasted figuring out what songs you felt like listening to, but Google will be able to use its expertly crafted soundscapes to help keep your mental state within some optimal parameters.
Far-fetched? I don’t think so. It’s basically just Shazam in reverse. The music susses you.
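If you want to picture the plumbing, here is a toy sketch of the loop in Python. Everything in it is made up for the sake of illustration (the sensor readings, the mood labels, the station names; neither Neste nor Google has published any such API), but it captures the basic shape of mind-free listening: sample the brain, guess the mood, pipe in the appropriate station.

```python
# Purely hypothetical sketch. No real ThinkEar or Google Play Music API exists here;
# every name below is invented for illustration.
import random
import time

MOOD_STATIONS = {
    "drowsy": "Gentle Wake-Up Mix",
    "anxious": "Calm Focus Radio",
    "restless": "Working It on the Dance Floor",
    "content": "Keep Doing Whatever You're Doing",
}

def read_eeg_sample():
    """Stand-in for the five ThinkEar sensors: fake band-power readings."""
    return {"alpha": random.random(), "beta": random.random(), "theta": random.random()}

def infer_mood(sample):
    """Toy classifier that maps band power to a mood label."""
    if sample["theta"] > 0.7:
        return "drowsy"
    if sample["beta"] > 0.7:
        return "anxious"
    if sample["alpha"] < 0.3:
        return "restless"
    return "content"

def serve_station(mood):
    """Stand-in for the cloud service picking an expertly crafted station."""
    print(f"Now playing: {MOOD_STATIONS[mood]} (because you seem {mood})")

if __name__ == "__main__":
    for _ in range(5):  # moment-by-moment listening, abbreviated
        serve_station(infer_mood(read_eeg_sample()))
        time.sleep(1)
```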
The applications go well beyond music. Cloud services could, for instance, beam timely notifications or warnings to your ears based on what’s going on in your brain, either at the subconscious or the conscious level. Think of what Facebook could do with that kind of capability. And if Amazon melded ThinkEar with both Echo and Audible, it could automatically intervene in your thought processes by reading you inspiring passages from pertinent books, like, say, The Fountainhead.
Maybe it’s not so odd that an oil company would invent a set of mind-reading earbuds. Once the land is tapped out, the extraction industries are going to need a new target, and what could possibly be more lucrative than fracking the human brain?
Time magazine’s Rana Foroohar says my new book, Utopia Is Creepy, “punches a hole in Silicon Valley cultural hubris.” The book comes out on September 6, the day after Labor Day, but you can read an excerpt from the introduction at Aeon today.
“Computing is not about computers any more,” wrote Nicholas Negroponte of the Massachusetts Institute of Technology in his 1995 bestseller Being Digital. “It is about living.” By the turn of the century, Silicon Valley was selling more than gadgets and software: it was selling an ideology. The creed was set in the tradition of U.S. techno-utopianism, but with a digital twist. The Valley-ites were fierce materialists – what couldn’t be measured had no meaning – yet they loathed materiality. In their view, the problems of the world, from inefficiency and inequality to morbidity and mortality, emanated from the world’s physicality, from its embodiment in torpid, inflexible, decaying stuff. The panacea was virtuality – the reinvention and redemption of society in computer code. They would build us a new Eden not from atoms but from bits. All that is solid would melt into their network. We were expected to be grateful and, for the most part, we were.
Our craving for regeneration through virtuality is the latest expression of what Susan Sontag in On Photography described as “the American impatience with reality, the taste for activities whose instrumentality is a machine.” What we’ve always found hard to abide is that the world follows a script we didn’t write. We look to technology not only to manipulate nature but to possess it, to package it as a product that can be consumed by pressing a light switch or a gas pedal or a shutter button. We yearn to reprogram existence, and with the computer we have the best means yet. We would like to see this project as heroic, as a rebellion against the tyranny of an alien power. But it’s not that at all. It’s a project born of anxiety. Behind it lies a dread that the messy, atomic world will rebel against us. What Silicon Valley sells and we buy is not transcendence but withdrawal. The screen provides a refuge, a mediated world that is more predictable, more tractable, and above all safer than the recalcitrant world of things. We flock to the virtual because the real demands too much of us.
“When a man is reduced to such a pass as playing cards by himself, he had better give up — or take to reading.” –Rawdon Crawley, The Card Player’s Manual, 1876
Big news out of the Googleplex today: the internet giant is offering a free solitaire game through its search engine and its mobile app. “When you search for ‘solitaire’ on Google,” goes the announcement on the company’s always breathless blog, “the familiar patience game may test yours!”
Pokémon Go, Candy Crush, Angry Birds, Farmville, Minesweeper, Space Invaders, Pong: computer games come and go, offering fleeting amusements before they turn stale.
But not solitaire. Solitaire endures.
Invented sometime in the eighteenth century, the single-player card game made a seamless leap to virtuality with the arrival of personal computers in the early 1980s. The gameplay was easy to program, and a deck of cards could be represented on even the most rudimentary of computer displays. Spectrum HoloByte’s Solitaire Royale became a huge hit when it was released in 1987. After Microsoft incorporated its own version of the game into the Windows operating system in 1990, solitaire quickly became the most used PC app of all time.
“Though on its face it might seem trivial, pointless, a terrible way to waste a beautiful afternoon, etc., solitaire has unquestionably transformed the way we live and work,” wrote Slate’s Josh Levin in 2008. “Computer solitaire propelled the revolution of personal computing, augured Microsoft’s monopolistic tendencies, and forever changed office culture.”
Google is late to the party, but it’s a party that will never end.
Microsoft had ulterior motives when it bundled solitaire into Windows — the game helped people learn how to use a mouse, and it kept them sitting in front of their Microsoft-powered computers like, to quote Iggy Pop, hypnotized chickens — and Google, too, is looking to accomplish something more than just injecting a little fun into our weary lives. “A minor move like putting games in search means that users – especially mobile users – will turn to the Google search app at a time when a lot of the information we need is available elsewhere on our devices,” reports TechCrunch.
It’s a devious game these companies play. We are but deuces in their decks.
Would it be too much of a stretch to suggest that solitaire is a perfect microcosm of personal computing, particularly now, in our social media age? In “The Psychology of Games,” a 2000 article in Psychology Review, Mark Griffiths pointed out that games are a “world-building activity.” They offer a respite from the demands of the real. “Freud was one of the first people to concentrate on the functions of playing games,” Griffiths wrote. “He speculated that game playing provided a temporary leave of absence from reality which reduced individual conflict and brought about a change from the passive to the active.” We love games because they “offer the illusion of control over destiny and circumstance.”
Solitaire, a game mixing skill and chance, also provides what psychologists call “intermittent reinforcement.” Every time a card is revealed, there is, for the player, the possibility of a reward. The suspense, and the yearning, is what makes the game so compelling, even addictive. “Basically,” wrote Griffiths, “people keep playing in the absence of a reward hoping that another reward is just around the corner.” Turning over an ace in solitaire is really no different from getting a like on Facebook or a retweet on Twitter. We crave such symbolic tokens of accomplishment, such sweet nothings.
Shuffle that deck again, Google. This time I’m going to be a winner.
It’s my longest, funniest book yet — granted, the competition was not exactly fierce on either count — and it is now printed, bound, and on its way to a bookstore near you. The title is Utopia Is Creepy . . . and Other Provocations, and the book collects my favorite posts published here at Rough Type since the blog launched in 2005, along with a selection of essays, aphorisms, and reviews that appeared over the same period. It also features a couple of new pieces, including one on transhumanism called “The Daedalus Mission.”
As I was pulling the collection together over the last year, I began to see it as an alternative history of recent times, from the founding of Facebook to the rise of @realDonaldTrump. It is, as well, a critique of Silicon Valley and its cultural powers and pretensions. Here’s a peek at the introduction:
Utopia Is Creepy is out on September 6. More information, including those all-important preorder links, can be found here.
Thanks to all who have read Rough Type over the years.
“Instagram shows us what a world without art looks like.” –Theses in Tweetform, #19
Ricky D’Ambrose, in “Instagram and the Fantasy of Mastery,” a mournful essay in The Nation, examines what he sees as a fundamental shift in aesthetics: “the transition from art, long vaunted as a special, and autonomous, area of sensuous intelligence, to creativity, to which art can only ever be superficially related.” Society’s love for the overlay, the template, the filter, is on the rise, inexorably it seems. In place of a personal style born of a mastery of technique, we have the instant application of a “look,” a set of easily recognizable visual tropes, usually borrowed either from an earlier artist’s style or from the output of an earlier creative technology, executed through a software routine. The McCabe & Mrs. Miller look. The Brownie 127 look. The Ms. Pac-Man look. Looks take the work, and the anxiety, out of art.
With looks, there is no time for squinting, no time for whatever is, or might be, inexplicable. A look—insofar as it has any resemblance to style at all—is a kind of instant style: quickly executed and dispatched, immediately understood, overcharged with incident. To say that a film, a photograph, a painting, or a room’s interior has a look is to assume a consensus about which parts of a nascent image are the most worthy of being parceled out and reproduced on a massive scale. It means making a claim about how familiar an image is, and how valuable it seems.
The shift from style to look is abetted by technology, in particular the infinite malleability of the digital artifact, but it seems to spring from a deeper source: our postmodern cultural exhaustion, with its attendant sense that fabrication is the defining quality of art and that all fabrications are equal in their fabricatedness. As the erstwhile taste-making class becomes ever more uncomfortable with the concept of taste, a concept now weighted with the deadly sins of elitism and privilege, the middlebrow becomes the new highbrow. The egalitarianism of the digital filter makes it a particularly attractive refuge for the antsy flâneur.
An insidious quality of the aesthetic of the look is, as D’Ambrose notes, its insatiable retrospective hunger. It gobbles up the past as well as the present. The very style that gave rise to a look comes to be seen as just another manifestation of the look: “One can now watch John Cassavetes’s A Woman Under the Influence just as one watches Joe Swanberg’s recent Happy Christmas: in quotation marks. (Both have ‘the 16-millimeter look.’) The look and its source become, in the mind of the viewer who knows the corresponding filter, identical.” The exercise of taste, like the exercise of creativity, becomes a matter of choosing the correct filter.
The phenomenon isn’t limited to the visual arts. Popular music also increasingly has a digitally constructed “look.” Writing is trickier, more resistant to programming than image or sound, but it’s not impossible to imagine a new breed of word processor able to apply a literary filter to a person’s words. A Poe filter. A Goethe filter. A Slouching Towards Bethlehem filter. Instagram for prose: surely somebody’s working on it.
Should augmented reality take off, we’ll be able to rid ourselves of artists and their demands once and for all. We’ll all be free to exercise our full, transformative creativity as observers and consumers, imposing a desired look on the world around us. Blink once for sepia-tinged. Blink twice for noir. Already there are earbuds in testing that allow you to tweak the sound of a concert you’re attending. They’re controlled by an app that includes, reports Motherboard, “a bunch of custom sound settings like ‘dirty country,’ ‘8-track,’ ‘Carnegie Hall,’ or ‘small studio.’” Sean Yeaton, of the band Parquet Courts, admitted that it could be cool to match your soundscape to your mood in mundane settings like the grocery store, but he balked at the idea of giving the audience control over the live sound at concerts, pointing out that it would be “pretty fucked up to go see Nine Inch Nails only to make it sound like Jefferson Starship.”
I guess your perspective depends on which side of the filter you happen to be on.
When news spread last week about the fatal crash of a computer-driven Tesla, I thought of a conversation I had a couple of years ago with a top computer scientist at Google. We were talking about some recent airliner crashes caused by “automation complacency” — the tendency for even very skilled pilots to tune out from their work after turning on autopilot systems — and the Google scientist noted that the problem of automation complacency is even more acute for drivers than for pilots. If you’re flying a plane and something unexpected happens, you usually have several seconds or even minutes to respond before the situation becomes dire. If you’re driving a car, you may have only a second or a fraction of a second to take action before you collide with another car, or a bridge abutment, or a tree. There are far more obstacles on the ground than in the sky.
With the Tesla accident, the evidence suggests that the crash happened before the driver even realized that he was about to hit a truck. He seemed to be suffering from automation complacency up to the very moment of impact. He trusted the machine, and the machine failed him. Such complacency is a well-documented problem in human-factors research, and it’s what led Google to change the course of its self-driving car program a couple of years ago, shifting to a perhaps quixotic goal of total automation without any human involvement. In rushing to give drivers the ability to switch on an “Autopilot” mode, Tesla ignored or dismissed the research, with a predictable result. As computer and car companies push the envelope of automotive automation, driver complacency and skill loss promise to become ever greater challenges — ones that (as Google appears to have concluded) may not be solvable given the fallibility of software, the psychology of human beings, and the realities of driving.*
Following is a brief excerpt from my book about the human consequences of automation, The Glass Cage, that describes how, as aviation became more automated over the years, pilots flying in so-called glass cockpits grew more susceptible to automation complacency and “skill fade” — to the point that the FAA is now urging pilots to practice manual flying more often.
Premature death was a routine occupational hazard for even the most expert pilots during aviation’s early years. Lawrence Sperry died in 1923 when his plane crashed into the English Channel. Wiley Post died in 1935 when his plane went down in Alaska. Antoine de Saint-Exupéry died in 1944 when his plane disappeared over the Mediterranean. Air travel’s lethal days are, mercifully, behind us. Flying is safe now, and pretty much everyone involved in the aviation business believes that advances in automation are one of the reasons why. Together with improvements in aircraft design, airline safety routines, crew training, and air traffic control, the mechanization and computerization of flight have contributed to the sharp and steady decline in accidents and deaths over the decades.
But this sunny story carries a dark footnote. The overall decline in the number of plane crashes masks the recent arrival of “a spectacularly new type of accident,” says Raja Parasuraman, a psychology professor at George Mason University and one of the world’s leading authorities on automation. When onboard computer systems fail to work as intended or other unexpected problems arise during a flight, pilots are forced to take manual control of the plane. Thrust abruptly into a dangerous situation, they too often make mistakes. The consequences, as the 2009 Continental Connection and Air France disasters show, can be catastrophic. Over the last thirty years, dozens of psychologists, engineers, and human factors researchers have studied what’s gained and lost when pilots share the work of flying with software. They’ve learned that a heavy reliance on computer automation can erode pilots’ expertise, dull their reflexes, and diminish their attentiveness, leading to what Jan Noyes, a human-factors expert at Britain’s University of Bristol, calls “a deskilling of the crew.”
Concerns about the unintended side effects of flight automation aren’t new. They date back at least to the early days of glass cockpits and fly-by-wire controls. A 1989 report from NASA’s Ames Research Center noted that as computers had begun to multiply on airplanes during the preceding decade, industry and governmental researchers “developed a growing discomfort that the cockpit may be becoming too automated, and that the steady replacement of human functioning by devices could be a mixed blessing.” Despite a general enthusiasm for computerized flight, many in the airline industry worried that “pilots were becoming over-dependent on automation, that manual flying skills may be deteriorating, and that situational awareness might be suffering.”
Studies conducted since then have linked many accidents and near misses to breakdowns of automated systems or to automation complacency or other “automation-induced errors” on the part of flight crews. In 2010, the FAA released preliminary results of a major study of airline flights over the preceding ten years which showed that pilot errors had been involved in nearly two-thirds of all crashes. The research further indicated, according to FAA scientist Kathy Abbott, that automation has made such errors more likely. Pilots can be distracted by their interactions with onboard computers, Abbott said, and they can “abdicate too much responsibility to the automated systems.” An extensive 2013 government report on cockpit automation, compiled by an expert panel and drawing on the same FAA data, implicated automation-related problems, such as a complacency-induced loss of situational awareness and weakened hand-flying skills, in more than half of recent accidents.
The anecdotal evidence collected through accident reports and surveys gained empirical backing from a rigorous study conducted by Matthew Ebbatson, a young human-factors researcher at Cranfield University, a top U.K. engineering school. Frustrated by the lack of hard, objective data on what he termed “the loss of manual flying skills in pilots of highly automated airliners,” Ebbatson set out to fill the gap. He recruited sixty-six veteran pilots from a British airline and had each of them get into a flight simulator and perform a challenging maneuver—bringing a Boeing 737 with a blown engine in for a landing during bad weather. The simulator disabled the plane’s automated systems, forcing the pilot to fly by hand. Some of the pilots did exceptionally well in the test, Ebbatson reported, but many performed poorly, barely exceeding “the limits of acceptability.”
Ebbatson then compared detailed measures of each pilot’s performance in the simulator—the pressure exerted on the yoke, the stability of airspeed, the degree of variation in course—with the pilot’s historical flight record. He found a direct correlation between a pilot’s aptitude at the controls and the amount of time that pilot had spent flying without the aid of automation. The correlation was particularly strong with the amount of manual flying done during the preceding two months. The analysis indicated that “manual flying skills decay quite rapidly towards the fringes of ‘tolerable’ performance without relatively frequent practice.” Particularly “vulnerable to decay,” Ebbatson noted, was a pilot’s ability to maintain “airspeed control”—a skill crucial to recognizing, avoiding, and recovering from stalls and other dangerous situations.
It’s no mystery why automation degrades pilot performance. Like many challenging jobs, flying a plane involves a combination of psychomotor skills and cognitive skills—thoughtful action and active thinking. A pilot needs to manipulate tools and instruments with precision while swiftly and accurately making calculations, forecasts, and assessments in his head. And while he goes through these intricate mental and physical maneuvers, he needs to remain vigilant, alert to what’s going on around him and able to distinguish important signals from unimportant ones. He can’t allow himself either to lose focus or to fall victim to tunnel vision. Mastery of such a multifaceted set of skills comes only with rigorous practice. A beginning pilot tends to be clumsy at the controls, pushing and pulling the yoke with more force than necessary. He often has to pause to remember what he should do next, to walk himself methodically through the steps of a process. He has trouble shifting seamlessly between manual and cognitive tasks. When a stressful situation arises, he can easily become overwhelmed or distracted and end up overlooking a critical change in circumstances.
In time, after much rehearsal, the novice gains confidence. He becomes less halting in his work and more precise in his actions. There’s little wasted effort. As his experience continues to deepen, his brain develops so-called mental models—dedicated assemblies of neurons—that allow him to recognize patterns in his surroundings. The models enable him to interpret and react to stimuli intuitively, without getting bogged down in conscious analysis. Eventually, thought and action become seamless. Flying becomes second nature. Years before researchers began to plumb the workings of pilots’ brains, Wiley Post described the experience of expert flight in plain, precise terms. He flew, he said in 1935, “without mental effort, letting my actions be wholly controlled by my subconscious mind.” He wasn’t born with that ability. He developed it through hard work.
When computers enter the picture, the nature and the rigor of the work change, as does the learning the work engenders. As software assumes moment-by-moment control of the craft, the pilot is relieved of much manual labor. This reallocation of responsibility can provide an important benefit. It can reduce the pilot’s workload and allow him to concentrate on the cognitive aspects of flight. But there’s a cost. Psychomotor skills get rusty, which can hamper the pilot on those rare but critical occasions when he’s required to take back the controls. There’s growing evidence that recent expansions in the scope of automation also put cognitive skills at risk. When more advanced computers begin to take over planning and analysis functions, such as setting and adjusting a flight plan, the pilot becomes less engaged not only physically but also mentally. Because the precision and speed of pattern recognition appear to depend on regular practice, the pilot’s mind may become less agile in interpreting and reacting to fast-changing situations. He may suffer what Ebbatson calls “skill fade” in his mental as well as his motor abilities.
Pilots are not blind to automation’s toll. They’ve always been wary about ceding responsibility to machinery. Airmen in World War I, justifiably proud of their skill in maneuvering their planes during dogfights, wanted nothing to do with the newfangled Sperry autopilots. In 1959, the original Mercury astronauts rebelled against NASA’s plan to remove manual flight controls from spacecraft. But aviators’ concerns are more acute now. Even as they praise the enormous gains in flight technology, and acknowledge the safety and efficiency benefits, they worry about the erosion of their talents. As part of his research, Ebbatson surveyed commercial pilots, asking them whether “they felt their manual flying ability had been influenced by the experience of operating a highly automated aircraft.” More than three-fourths reported that “their skills had deteriorated”; just a few felt their skills had improved. A 2012 pilot survey conducted by the European Aviation Safety Agency found similarly widespread concerns, with 95 percent of pilots saying that automation tended to erode “basic manual and cognitive flying skills.”
Rory Kay, a long-time United Airlines captain who until recently served as the top safety official with the Air Line Pilots Association, fears the aviation industry is suffering from “automation addiction.” In a 2011 interview with the Associated Press, he put the problem in stark terms: “We’re forgetting how to fly.”
What the aviation industry has discovered is that there’s a tradeoff between computer automation and human skill and attentiveness. Getting the balance right is exceedingly tricky. Just because some degree of automation is good, that doesn’t mean that more automation is necessarily better. We seem fated to learn this hard lesson once again with the even trickier process of automotive automation.
*UPDATE (7/7): The Times reports: “Experiments conducted last year by Virginia Tech researchers and supported by the national safety administration found that it took drivers of [self-driving] cars an average of 17 seconds to respond to takeover requests. In that period, a vehicle going 65 m.p.h. would have traveled 1,621 feet — more than five football fields.”
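The arithmetic in that quote holds up, if you care to check it:

$$65\ \text{mph} = \frac{65 \times 5280\ \text{ft}}{3600\ \text{s}} \approx 95.3\ \text{ft/s}, \qquad 95.3\ \text{ft/s} \times 17\ \text{s} \approx 1{,}620\ \text{ft},$$

which is indeed longer than five 300-foot playing fields.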
Will Davies cuts through the prevailing emotionalism in dissecting the Brexit vote:
The Remain campaign continued to rely on forecasts, warnings and predictions, in the hope that eventually people would be dissuaded from ‘risking it’. But to those that have given up on the future already, this is all just more political rhetoric. In any case, the entire practice of modelling the future in terms of ‘risk’ has lost credibility, as evidenced by the now terminal decline of opinion polling as a tool for political control. …
In place of facts, we now live in a world of data. Instead of trusted measures and methodologies being used to produce numbers, a dizzying array of numbers is produced by default, to be mined, visualised, analysed and interpreted however we wish. If risk modelling (using notions of statistical normality) was the defining research technique of the 19th and 20th centuries, sentiment analysis is the defining one of the emerging digital era. We no longer have stable, ‘factual’ representations of the world, but unprecedented new capacities to sense and monitor what is bubbling up where, who’s feeling what, what’s the general vibe. …
As the 23rd June turned into 24th June, it became manifestly clear that prediction markets are little more than an aggregative representation of the same feelings and moods that one might otherwise detect via twitter. They’re not in the business of truth-telling, but of mood-tracking.