The alchemist’s delusion


Perhaps it was the altitude, but Eric Schmidt really outdid himself yesterday at Davos:

I say this with almost complete seriousness. Almost all of the problems we debate can be solved literally with more broadband connectivity in these countries. And the reason is, broadband is how you address the governance issues, the information issues, the education issues, the personal security issues, the human rights issues, the women’s empowerment issues.

Call it the alchemist’s delusion. Langdon Winner has described the affliction well:

To be specific: the arrival of any new technology that has significant power and practical potential always brings with it a wave of visionary enthusiasm that anticipates the rise of a utopian social order. Surely the coming of this machine, this new device, this technical novelty will revitalize democracy. Surely its properties will foster greater equality and widespread prosperity throughout the land. Surely it will distribute political power more broadly and empower citizens to act for themselves. Surely it will cause us to cultivate new and better selves, becoming larger and more magnanimous people than we have been before. And surely it will connect individuals and groups in ways that will produce greater social harmony and a relaxation of human conflict. From the coming of the steam locomotive, to the introduction of the telegraph, telephone, motion pictures, centrally generated electrical power, automobile, radio, television, nuclear power, guided missile, and the computer (to name just a few), this has been the recurring theme: celebrate! The moment of redemption is at hand. …

The very language used to convey the message insists that the wondrous blessing on the horizon is ineluctable. So great is its power and glory that any demand for negotiations about exactly which technology will be introduced, by whom, and in what form is mere impudence. Only a fool would ask to see the fine print, examine the blueprints, or check the credentials of the planners.

From Davos Man to Davos Bot?


In an age of robotic decision-making, are CEOs necessary? It’s a question that needs to be asked, and Frank Pasquale is asking it:

When BART workers went on strike, Silicon Valley worthies threatened to replace them with robots. But one could just as easily call for the venture capitalists to be replaced with algorithms. Indeed, one venture capital firm added an algorithm to its board in 2013. Travis Kalanick, the CEO of Uber, responded to a question on driver wage demands by bringing up the prospect of robotic drivers. But given Uber’s multiple legal and PR fails in 2014, a robot probably would have done a better job running the company than Kalanick. …

Thiel Fellow and computer programming prodigy Vitaly Bukherin has stated that automation of the top management functions at firms like Uber and AirBnB would be “trivially easy.” Automating the automators may sound like a fantasy, but it is a natural outgrowth of mantras (e.g., “maximize shareholder value”) that are commonplaces among the corporate elite. To attract and retain the support of investors, a firm must obtain certain results, and the short-run paths to attaining them (such as cutting wages, or financial engineering) are increasingly narrow. And in today’s investment environment of rampant short-termism, the short is often the only term there is.

Just as the killer business app for the personal computer was the spreadsheet, so the killer business app for artificial intelligence may turn out to be the algorithmic CEO. One seems to follow from the other, like an oak from an acorn. The Jetsons, you see, got it wrong: It’s not Rosie who turns into the robot; it’s Mr. Spacely.

Does innovation arc toward decadence?


Three years ago, I posted a piece here titled “The Hierarchy of Innovation,” which argued, speculatively, that the focus of innovation has followed Abraham Maslow’s hierarchy of needs, beginning with Technologies of Survival and now concentrating on Technologies of the Self. With The Glass Cage done, I’ve decided to return to this idea with hopes of fleshing it out. I’m republishing my original post below and am soliciting your comments about it. Thanks.

“If you could choose only one of the following two inventions, indoor plumbing or the Internet, which would you choose?” -Robert J. Gordon

Justin Fox is the latest pundit to ring the “innovation ain’t what it used to be” bell. “Compared with the staggering changes in everyday life in the first half of the 20th century,” he writes, summing up the argument, “the digital age has brought relatively minor alterations to how we live.” Fox has a lot of company. He points to sci-fi author Neal Stephenson, who worries that the Internet, far from spurring a great burst of creativity, may have actually put innovation “on hold for a generation.” Fox also cites economist Tyler Cowen, who has argued that, recent techno-enthusiasm aside, we’re living in a time of innovation stagnation. He could also have mentioned tech powerbroker Peter Thiel, who believes that large-scale innovation has gone dormant and that we’ve entered a technological “desert.” Thiel blames the hippies:

Men reached the moon in July 1969, and Woodstock began three weeks later. With the benefit of hindsight, we can see that this was when the hippies took over the country, and when the true cultural war over Progress was lost.

The original inspiration for such grousing – about progress, not about hippies – came from Robert J. Gordon, a Northwestern University economist whose 2000 paper “Does the ‘New Economy’ Measure Up to the Great Inventions of the Past?” included a damning comparison of the flood of inventions that occurred a century ago with the seeming trickle that we see today. Consider the new products invented in just the ten years between 1876 and 1886: internal combustion engine, electric lightbulb, electric transformer, steam turbine, electric railroad, automobile, telephone, movie camera, phonograph, linotype, roll film (for cameras), dictaphone, cash register, vaccines, reinforced concrete, flush toilets. The typewriter had arrived a few years earlier and the punch-card tabulator would appear a few years later. And then, in short order, came airplanes, radio, air conditioning, the vacuum tube, jet aircraft, television, refrigerators and a raft of other home appliances, as well as revolutionary advances in manufacturing processes. (And let’s not forget The Bomb.) The conditions of life changed utterly between 1890 and 1950, observed Gordon. Between 1950 and today? Not so much.

So why is innovation less impressive today? Maybe Thiel is right, and it’s the fault of hippies, liberals, and other degenerates. Or maybe it’s crappy education. Or a lack of corporate investment in research. Or short-sighted venture capitalists. Or overaggressive lawyers. Or imagination-challenged entrepreneurs. Or maybe it’s a catastrophic loss of mojo. But none of these explanations makes much sense. The aperture of science grows ever wider, after all, even as the commercial and reputational rewards for innovation grow ever larger and the ability to share ideas grows ever stronger. Any barrier to innovation should be swept away by such forces.

Let me float an alternative explanation: There has been no decline in innovation; there has just been a shift in its focus. We’re as creative as ever, but we’ve funneled our creativity into areas that produce smaller-scale, less far-reaching, less visible breakthroughs. And we’ve done that for entirely rational reasons. We’re getting precisely the kind of innovation that we desire – and that we deserve.

My idea – and it’s a rough one – is that there’s a hierarchy of innovation that runs in parallel with Abraham Maslow’s famous hierarchy of needs. Maslow argued that human needs progress through five stages, with each new stage requiring the fulfillment of lower-level, or more basic, needs. So first we need to meet our most primitive Physiological needs, and that frees us to focus on our needs for Safety, and once our needs for Safety are met, we can attend to our needs for Belongingness, and then on to our needs for personal Esteem, and finally to our needs for Self-Actualization. If you look at Maslow’s hierarchy as an inflexible structure, with clear boundaries between its levels, it falls apart. Our needs are messy, and the boundaries between them are porous. A caveman probably pursued self-esteem and self-actualization, to some degree, just as we today spend effort seeking to fulfill our physical needs. But if you look at the hierarchy as a map of human focus, or of emphasis, then it makes sense – and indeed seems to be borne out by history. In short: The more comfortable you are, the more time you spend thinking about yourself.

If progress is shaped by human needs, then general shifts in needs would also bring shifts in the nature of technological innovation. The tools we invent would move through the hierarchy of needs, from tools that help safeguard our bodies on up to tools that allow us to modify our internal states, from tools of survival to tools of the self. Here’s my crack at what the hierarchy of innovation looks like (click on the image to enlarge it):

[image: the hierarchy of innovation]

The focus, or emphasis, of innovation moves up through five stages, propelled by shifts in the needs we seek to fulfill. In the beginning come Technologies of Survival (think fire), then Technologies of Social Organization (think cathedral), then Technologies of Prosperity (think steam engine), then Technologies of Leisure (think TV), and finally Technologies of the Self (think Facebook, or Prozac).

As with Maslow’s hierarchy, you shouldn’t look at my hierarchy as a rigid one. Innovation today continues at all five levels. But the rewards, both monetary and reputational, are greatest at the highest level (Technologies of the Self), which has the effect of shunting investment, attention, and activity in that direction. We’re already physically comfortable, so getting a little more physically comfortable doesn’t seem particularly pressing. We’ve become inward looking, and what we crave are more powerful tools for modifying our internal state or projecting that state outward. An entrepreneur has a greater prospect of fame and riches if he creates, say, a popular social-networking tool than if he creates a faster, more efficient system for mass transit. The arc of innovation, to put a dark spin on it, is toward decadence.

One of the consequences is that, as we move to the top level of the innovation hierarchy, the inventions have less visible, less transformative effects. We’re no longer changing the shape of the physical world or even of society, as it manifests itself in the physical world. We’re altering internal states, transforming the invisible self. Not surprisingly, when you step back and take a broad view, it looks like stagnation – it looks like nothing is changing very much. That’s particularly true when you compare what’s happening today with what happened a hundred years ago, when our focus on Technologies of Prosperity was peaking and our focus on Technologies of Leisure was also rapidly increasing, bringing a highly visible transformation of our physical circumstances.

If the current state of progress disappoints you, don’t blame innovation. Blame yourself.

Image: Anjan Chatterjee.

Give ’em enough Twitter

From his perch in Silicon Valley, cub economist Marc Andreessen offers a brilliant new argument in favor of income inequality:

[image: tweet by Marc Andreessen]

You see, it’s ok to give raises to the wealthy, because the wealthy don’t produce the “things” that “lower-income consumers” need to buy. But you shouldn’t increase the wages of lower-income workers involved in the production of “things,” since they’re going to spend most of their money on those “things.” In other words: Pay the poor less, and they’ll feel richer. Sweet!

A crisis in control


“What do you think about machines that think?” That’s the Edge question of the year for 2015. Here’s my reply.

Machines that think think like machines. That fact may disappoint those who look forward, with dread or longing, to a robot uprising. For most of us, it is reassuring. Our thinking machines aren’t about to leap beyond us intellectually, much less turn us into their servants or pets. They’re going to continue to do the bidding of their human programmers.

Much of the power of artificial intelligence stems from its very mindlessness. Immune to the vagaries and biases that attend conscious thought, computers can perform their lightning-quick calculations without distraction or fatigue, doubt or emotion. The coldness of their thinking complements the heat of our own.

Where things get sticky is when we start looking to computers to perform not as our aids but as our replacements. That’s what’s happening now, and quickly. Thanks to advances in artificial-intelligence routines, today’s thinking machines can sense their surroundings, learn from experience, and make decisions autonomously, often at a speed and with a precision that are beyond our own ability to comprehend, much less match. When allowed to act on their own in a complex world, whether embodied as robots or simply outputting algorithmically derived judgments, mindless machines carry enormous risks along with their enormous powers. Unable to question their own actions or appreciate the consequences of their programming — unable to understand the context in which they operate — they can wreak havoc, either as a result of flaws in their programming or through the deliberate aims of their programmers.

We got a preview of the dangers of autonomous software on the morning of August 1, 2012, when Wall Street’s biggest trading outfit, Knight Capital, switched on a new, automated program for buying and selling shares. The software had a bug hidden in its code, and it immediately flooded exchanges with irrational orders. Forty-five minutes passed before Knight’s programmers were able to diagnose and fix the problem. Forty-five minutes isn’t long in human time, but it’s an eternity in computer time. Oblivious to its errors, the software made more than four million deals, racking up $7 billion in errant trades and nearly bankrupting the company. Yes, we know how to make machines think. What we don’t know is how to make them thoughtful.
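The Knight episode is often cited as an argument for hard-coded limits on autonomous systems. As a rough sketch of the idea (the names and thresholds below are invented for illustration, not Knight's actual system), an automated order loop can be wrapped in a simple kill switch that refuses to keep trading once cumulative exposure breaches a hard cap:

```python
# Hypothetical sketch of a pre-trade "kill switch": every name and
# threshold here is invented for illustration only.

class KillSwitchTripped(Exception):
    """Raised when cumulative exposure would breach the hard limit."""

def run_orders(orders, max_exposure):
    """Execute orders until a crude exposure limit trips.

    orders: iterable of (symbol, notional_dollars) pairs
    max_exposure: hard cap on cumulative notional before halting
    Returns the list of orders actually executed.
    """
    executed = []
    exposure = 0.0
    for symbol, notional in orders:
        if exposure + notional > max_exposure:
            # A mindless loop keeps trading; this one stops itself.
            raise KillSwitchTripped(
                f"halting: {exposure + notional:.0f} would exceed "
                f"the {max_exposure:.0f} cap"
            )
        exposure += notional
        executed.append((symbol, notional))
    return executed
```

With a guard like this, a buggy feed that repeats the same order endlessly gets cut off after a bounded loss instead of running for forty-five minutes. It is no substitute for thoughtfulness, of course; it only caps the damage that mindlessness can do.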

All that was lost in the Knight fiasco was money. As software takes command of more economic, social, military, and personal processes, the costs of glitches, breakdowns, and unforeseen effects will only grow. Compounding the dangers is the invisibility of software code. As individuals and as a society, we increasingly depend on artificial-intelligence algorithms that we don’t understand. Their workings, and the motivations and intentions that shape their workings, are hidden from us. That creates an imbalance of power, and it leaves us open to clandestine surveillance and manipulation. Last year we got some hints about the ways that social networks conduct secret psychological tests on their members through the manipulation of information feeds. As computers become more adept at monitoring us and shaping what we see and do, the potential for abuse grows.

During the nineteenth century, society faced what the late historian James Beniger described as a “crisis of control.” The technologies for processing matter had outstripped the technologies for processing information, and people’s ability to monitor and regulate industrial and related processes had in turn broken down. The control crisis, which manifested itself in everything from train crashes to supply-and-demand imbalances to interruptions in the delivery of government services, was eventually resolved through the invention of systems for automated data processing, such as the punch-card tabulator that Herman Hollerith built for the U.S. Census Bureau. Information technology caught up with industrial technology, enabling people to bring back into focus a world that had gone blurry.

Today, we face another control crisis, though it’s the mirror image of the earlier one. What we’re now struggling to bring under control is the very thing that helped us reassert control at the start of the twentieth century: information technology. Our ability to gather and process data, to manipulate information in all its forms, has outstripped our ability to monitor and regulate data processing in a way that suits our societal and personal interests. Resolving this new control crisis will be one of the great challenges in the years ahead. The first step in meeting the challenge is to recognize that the risks of artificial intelligence don’t lie in some dystopian future. They are here now.

Image: Jean Mottershead.

Glass Cage hits Blighty

The UK edition of The Glass Cage comes out tomorrow, sporting a different cover and subtitle:

[image: UK cover of The Glass Cage]

I’ve been gratified by the early reviews in the British press. Here are some choice bits:

Bill Thompson in BBC Focus magazine: “My copy of this excellent book is so thoroughly scribbled on that I’d simply never be able to get rid of it. I’ve circled lots of stuff I agree — or disagree — with, and added exclamation marks to insights that I want to explore more deeply. … The Glass Cage is infused with a humanist perspective that puts people and their needs at the centre of the argument around automation and the alienation created by many modern systems. … So put down your phone, take off your Google Glass and read this.”

Ian Critchley in The Sunday Times: “[Carr] recognizes that machines have freed us from the burden of many mundane tasks. His argument, though, is that the balance has tipped too far. Automation has taken over some of the activities that challenged us and strengthened our connection to the environment. … His book is a valuable corrective to the belief that technology will cure all ills, and a passionate plea to keep machines the servants of humans, not the other way around.”

Richard Waters in The Financial Times: “Nicholas Carr is not a technophobe. But in The Glass Cage he brings a much-needed humanistic perspective to the wider issues of automation. In an age of technological marvels, it is easy to forget the human. … How to achieve a more balanced view of progress when all of today’s incentives are geared towards an ever-faster cycle of invention and deployment of new technologies? There is no room for an answer in this wide-ranging book. As ever, though, Carr’s skill is in setting the debate running, not finding answers.”

John Preston in The Telegraph: “What exactly has automation done for us? Has it freed people from drudgery and made them happier? Or has it, as Nicholas Carr wonders in this elegantly persuasive book, had the opposite effect, transforming us into passive zombies, helplessly reliant on machines to tell us what to do? … [Carr is] no Luddite who thinks that we would all be better off living in holes in the ground and making our own woad. Instead, in his thoughtful, non-strident way, he’s simply pointing out that the cost of automation may be far higher than we have realised.”

Giles Whittell in The Times: “An important book that a lot of people won’t want to take seriously, but should. … [Carr] has a deep and valuable fear of techno-emasculation. It’s a fear based on evidence but also intuition.”

Carole Cadwalladr in The Observer: “Provocative … Who is it serving, this new technology, asks Carr. Us? Or the companies that make billions from it? Billions that have shown no evidence of trickling down. The question shouldn’t be ‘who cares?’ he says at one point. It should be: how far from the world do we want to retreat?”

Jasmine Gardner in the Evening Standard: “Carr argues, very convincingly, that automation is eroding our memory while simultaneously creating a complacency within us that will diminish our ability to gain new skills.”

The Bookseller: “An eye-opening exposé of how automation is altering our ability to solve problems, forge memories and acquire skills.”

The needle and the damage done


“Who cares about science? This is music. We’re talking about how you feel.” So said Neil Young in introducing his high-resolution Pono player. Good luck, Neil, but I fear you’re a little downstream. In the end it’s more about the recording than the playback. This is from Tom Whitwell’s article “Why Do All Records Sound the Same?”:

What makes working with Pro Tools really different from tape is that editing is absurdly easy. Most bands record to a click track, so the tempo is locked. If a guitarist plays a riff fifty times, it’s a trivial job to pick the best one and loop it for the duration of the verse.

“Musicians are inherently lazy,” says John [Leckie]. “If there’s an easier way of doing something than actually playing, they’ll do that.” A band might jam together for a bit, then spend hours or days choosing the best bits and pasting a track together. All music is adopting the methods of dance music, of arranging repetitive loops on a grid. With the structure of the song mapped out in coloured boxes on screen, there’s a huge temptation to fill in the gaps, add bits and generally clutter up the sound.

This is also why you no longer hear mistakes on records. Al Kooper’s shambolic Hammond organ playing on “Like A Rolling Stone” could never happen today because a diligent producer would discreetly shunt his chords back into step. Then there’s tuning. Until electronic guitar tuners appeared around 1980, the band would tune by ear to the studio piano. Everyone was slightly off, but everyone was listening to the pitch of the other instruments, so they were off together – musically in tune with one another.

(Meanwhile, back at the ranch.)

Image: John Vincent.