E-reading after the e-reader

E-readers like the original Kindle and the original Nook did a pretty good job of replicating the experience of reading a printed page — and that was one of their big selling points. When Amazon introduced the first edition of the Kindle late in 2007, the company went out of its way to emphasize the device’s “paper-like” screen.

The black-and-white E Ink display, CEO Jeff Bezos said in a marketing video, “doesn’t look like any computer screen you’ve ever seen. It looks like paper.” The point was underscored in the video by some best-selling writers, who praised the Kindle’s resemblance to a book. “It looks like ink,” said Michael Lewis. “It’s not like reading a computer screen. The genius of the thing is you don’t notice that much difference between reading on the screen and reading in a book.” The Kindle was presented as a specialized gadget, designed specifically for displaying the text of books and other written works. “Amazon Kindle is for reading,” Bezos declared. In designing it, he said, the company “focused on simplicity and making reading as great as it can be.”

The “paper-like” pitch was intended to encourage book buyers to give e-books a try. And it worked. But there was reason to believe, even back in 2007, that the specialized e-reader was doomed. Single-purpose computers, particularly networked ones, have a hard time competing against general-purpose computers once the more versatile machines incorporate the more limited ones’ specialized functionality. A single-purpose device tends, in other words, to morph from a piece of dedicated hardware into a simple application running alongside other applications. Multitasking beats unitasking. Amazon itself, with its ambition to be not just a digital book seller but a digital media conglomerate, had a clear incentive to steadily push its customers away from e-readers and toward multipurpose devices able to display all the forms of media that it sells. Even when Bezos was stressing the original Kindle’s purity as a reading device, one had the suspicion that his ultimate ambition had little to do with replicating the simplicity and calm of ink on paper. The original Kindle was a bit of a Trojan horse.

Now we have evidence from the market that the specialized e-reader is indeed a transitional device. Sales of e-readers have already peaked. Last year, 23 million of them were sold. This year, sales have plummeted an estimated 36 percent, to just 15 million units, according to market researcher IHS. By 2016, IHS foresees e-reader sales dwindling to just 7 million units. Displacing the e-reader is, of course, the multipurpose tablet. As e-reader sales have fallen, tablet sales have exploded. About 140 million tablets will be sold this year, and the number is projected to approach 200 million in 2013. That’s despite the fact that tablets are considerably more expensive than e-readers. “Last year it seemed that the market might be big enough for both dedicated e-readers and tablets,” observes Technology Review’s Mike Orcutt. “But now it appears the versatility of tablets is winning out.”

That also means that, when it comes to the reading of e-books, the once-vaunted “paper-like” screen is losing out to the computer screen, and the simplicity of a specialized reading device is losing out to the complexities and distractions of a general-purpose, networked computer. If book readers continue to shift from the page to the screen, as a new Pew study suggests is likely, the text of books will end up being displayed in a radically different setting from that of the printed or scribal page that has defined the book for the first 2,000 years of its existence. That doesn’t mean that readers won’t be able to immerse themselves in books anymore. The technology of a book is not rigidly deterministic. The skill of the writer still matters, as does the desire of readers to get lost in stories and arguments. But the importance of the technology in shaping reading habits (and publishing decisions), particularly over the long run, shouldn’t be discounted. If the technology of the page provided a barrier against the distractions of everyday life, encouraging the focus that is the essence of deep reading, the computer screen does the opposite. It inundates us with distractions, encourages the division of attention. It fights deep reading rather than promoting it.

For the last few years, we could tell ourselves that, as Michael Lewis put it, there wouldn’t be “that much difference between reading on the screen and reading in a book.” We’re not going to be able to tell ourselves that much longer. Whatever the future of the e-book may be, it seems pretty certain that it won’t be “paper-like.”

From counterculture to anticulture

From his Northern California perch, tech publisher Tim O’Reilly twitters about the future of books:

I don’t really give a shit if literary novels go away. They’re an elitist pursuit. And they’re relatively recent. The most popular author in the 1850s in the US wasn’t Herman Melville writing Moby-Dick, you know, or Nathaniel Hawthorne writing The House of the Seven Gables. It was Henry Wadsworth Longfellow writing long narrative poems that were meant to be read aloud. So the novel as we know it today is only a 200-year-old construct. And now we’re getting new forms of entertainment, new forms of popular culture.

This is so foolish and confused, so callous. It takes a remarkable degree of critical vacuity to suggest that because an art form is “relatively recent,” it lacks worth — that because the novel is “only a 200-year-old [sic] construct,” it’s somehow suspect, and disposable. And how sad and shallow to view the reading (or writing) of a book like Moby-Dick as an exercise in elitism. It’s the antithesis of elitism.

Later in the interview, O’Reilly muses, “I think people in Silicon Valley don’t realize what a bubble they’re living in.” You can say that again, Tim.

Let them eat smartphones

“We are being afflicted,” wrote John Maynard Keynes in 1930, “with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come — namely, technological unemployment.” He elaborated:

This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour. But this is only a temporary phase of maladjustment. All this means in the long run is that mankind is solving its economic problem. I would predict that the standard of life in progressive countries one hundred years hence will be between four and eight times as high as it is to-day. There would be nothing surprising in this even in the light of our present knowledge. It would not be foolish to contemplate the possibility of a far greater progress still.

Indeed, Keynes thought it entirely possible that, by 2030, scientific and technological progress would have freed humankind from “the struggle for subsistence” and propelled us to “our destination of economic bliss.” Technology would be doing our jobs for us, and the economy would have spread material wealth to everyone. Our only problem at that point would be to figure out how to use our endless hours of leisure — to teach ourselves “to enjoy” rather than “to strive.”

We’re now within spitting distance of 2030, so a progress update on the Keynesian utopia would seem to be in order. A good place to start might be a chart from MIT’s Andrew McAfee.

For more than 30 years after World War II, McAfee observes, GDP, productivity, employment, and income all rose together in seeming lockstep. But in the early 80s, we began to see a “great decoupling,” with growth in employment and household income faltering even as output and productivity continued to shoot upward.

By the end of 2011, things had become much worse in two ways. First, median household income was actually lower than it was a decade earlier. In fact, it was lower than at any point since 1996. And second, the American job creation engine was sputtering badly. Between 1981 and 2001 the economy generated plenty of low-paying jobs. After 2001, though, it wasn’t even generating enough of these, and employment growth started to lag badly behind GDP and productivity growth.

What happened? It’s not entirely clear. Surely there are many forces at work. But McAfee argues that one of the reasons for the decoupling is that technological progress, particularly in the form of computerization, is pushing the economy’s bounties away from labor and toward capital:

Digital technologies have been able to do routine work for a while now. This allows them to substitute for less-skilled and -educated workers, and puts a lot of downward pressure on the median wage. As computers and robots get more and more powerful while simultaneously getting cheaper and more widespread this phenomenon spreads, to the point where economically rational employers prefer buying more technology over hiring more workers. In other words, they prefer capital over labor. This preference affects both wages and job volumes. And the situation will only accelerate as robots and computers learn to do more and more, and to take over jobs that we currently think of not as “routine,” but as requiring a lot of skill and/or education.

McAfee has been writing astutely on the economic consequences of computerization for some time, and I think it’s fair to say that, like Keynes before him, he’s an optimist when it comes to technology. He believes, or at least wants to believe, that we’re in another “temporary phase of maladjustment” and that we’ll be able to innovate our way out of it and recouple what’s been decoupled. But just how we get off the path we’re on, he admits, is far from clear: “it’s not going to be reversed by a couple quick policy fixes or even, I believe, by deeper changes to our educational and entrepreneurial systems.” We’ve entered a new “technological era,” and the old assumptions and solutions may not hold anymore.

It’s always been pretty clear that technological progress has an economic bias — that it tends to reward some folks more than others. Many economists have argued (and many politicians have assumed) that the bias is fundamentally “skill-based.” Progress rewards the skilled and punishes the unskilled. That’s a tough problem, but at least you know how to solve it: you broaden the reach and quality of education in order to shift more people into the skilled camp. But, as Paul Krugman recently suggested, what we might be seeing today is capital-biased technological change, which “tends to shift the distribution of income away from workers to the owners of capital.” That’s a harder nut to crack:

If this is the wave of the future, it makes nonsense of just about all the conventional wisdom on reducing inequality. Better education won’t do much to reduce inequality if the big rewards simply go to those with the most assets. Creating an “opportunity society” … won’t do much if the most important asset you can have in life is, well, lots of assets inherited from your parents.

The implications, writes Krugman, are “really uncomfortable.”

So maybe Keynes will be proved half-right. By 2030, technological progress will have freed the masses from their jobs, but it won’t have freed them from the struggle for subsistence.

Digital sharecropping, Kickstarter-style

It’s hard to predict how important Kickstarter, the buzzy crowdfunding site, will ultimately prove to be (ask me in three years), but, for the moment, it seems to deserve all the accolades it’s scooping up. The bake-sale-at-web-scale operation is reportedly distributing donations to artists and other creative types that add up to about as much as the total funding doled out by the National Endowment for the Arts — and Kickstarter is growing in a way the NEA is not. At a time when taxpayers and politicians are cutting back public support of the arts and many creative industries are in a state of uncertainty, if not disarray, having a simple means of pitching a project to millions of generous would-be donors seems like a godsend. The fact that Kickstarter is a for-profit company backed by millions in venture capital makes the phenomenon seem all the sweeter in a way. The rich capitalist and the struggling artist lie down peacefully together in the Internet meadow, like the fabled lion and lamb.

To all appearances, Kickstarter provides a welcome relief from the pervasive Web 2.0 digital-sharecropping business model, in which a company like Facebook gives people a little plot of online turf and a set of tools to work it, and then, through ad sales or other commercial means, reaps the monetary rewards from the combined labor of all its sharecroppers. The “users” who “generate content” for social media companies may attract attention or prestige as a result of the work they do, or simply gain the satisfaction of being part of a community, but what they don’t get is paid. Kickstarter, by contrast, gives the bulk of the cash it generates to the creators, with the company and its partners pocketing just a modest vig, a mere 10 percent of the proceeds. When we sign up with Kickstarter, it feels liberating, writes Josh MacPhee in the new issue of The Baffler, because we’re “rejecting the usual game of winners and losers that comes with capitalism and turning to a model that allows everyone to win — one that combines the freedom of self-employment with the shared experiences of community building.”

But is it really so simple? After MacPhee recently raised money for a project through Kickstarter, he began to get a little suspicious of the company. In his article, he peels back the layers of Kickstarter’s economic onion, and what he discovers is something a good deal less pure and a good deal more complicated than what appears on the surface. The company, he writes, “cultivates the illusion that when you use its fundraising tools, you are opting out of wage labor.” But in fact Kickstarter “manages goods, services, and labor in ways that are quite familiar”:

The Kickstarter platform and website might not look like a shop floor, but when you are there, you are working. The exchange goes like this: rather than work for a wage with minimum protections and some semblance of benefits, you marshal all your friends, and their friends, to ante up small amounts of money for your project. If you reach your goal, you get to keep the money you raise, but Kickstarter peels off a dime for every dollar your family and friends chip in. Then a nickel of that dime goes to Kickstarter’s exclusive money broker, Amazon.com, for processing the financials. So Kickstarter’s gross revenue is 5 percent of all the money brought in by all of our projects. (On four recent, celebrated, multimillion-dollar projects alone, Kickstarter brought in more than $1,175,000.) Since Internet infrastructure is relatively inexpensive, the costs of running a website that doesn’t produce or distribute any material goods is limited. The scaling up of web traffic doesn’t translate to an equal scale-up in costs, and at a certain point, costs max out, and profit skyrockets.

But, still, you keep 90 percent of what you raise, assuming you reach your goal. That’s a pretty good return on your efforts, right? MacPhee has doubts:

Well, say you run a campaign for $10,000—somewhere between a third to two-thirds of what a struggling artist might make in a year. You send out thousands of emails about your campaign, post it on dozens of friends’ Facebook pages, send out lots of tweets, talk it up with everyone you meet, and try to get as many people as possible to do the same. You’re a popular person living in a major city, with an active social network and a compelling project, so you hit your mark—$10,000 is pledged. Kickstarter and Amazon take 10 percent right off the top, so now you are down to $9,000. If the money is coming in to you as an individual, Kickstarter treats you like a self-employed contractor, so it’s on you to figure out your tax burden and pay it, likely at least another 15 percent, so now you’re at $7,650. For a $10,000 campaign, you will have around 200 donors, of whom 150 will want rewards. If your rewards are physical objects, and you were generous in your offerings (a good idea when raising money), you’re going to have to wrap 150 packages, all of which need shipping supplies and postage to get to their destinations. On average, you’re likely spending $8 per package, so that’s another $1,200 off your total; so now you’re at $6,450. Within a few weeks a third of the money you raised is gone, and you haven’t begun to spend it on the project you were raising it for.
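
For anyone who wants to check MacPhee’s back-of-the-envelope math, here is a minimal sketch in Python that simply restates his assumptions (a $10,000 goal, the 10 percent Kickstarter-and-Amazon cut, a roughly 15 percent tax bite, and 150 reward packages at about $8 apiece). The figures and the function name are illustrative, not Kickstarter’s official terms.

```python
# A restatement of MacPhee's back-of-the-envelope arithmetic. All figures are
# his illustrative assumptions, not Kickstarter's published terms.

def creator_net(pledged=10_000.0, platform_cut=0.10, tax_rate=0.15,
                packages=150, cost_per_package=8.0):
    """Estimate what a creator keeps from a successfully funded campaign."""
    after_fees = pledged * (1 - platform_cut)      # Kickstarter + Amazon take ~10%
    after_taxes = after_fees * (1 - tax_rate)      # self-employment taxes, roughly
    after_shipping = after_taxes - packages * cost_per_package  # reward postage
    return after_fees, after_taxes, after_shipping

fees, taxes, net = creator_net()
print(fees, taxes, net)  # 9000.0 7650.0 6450.0, landing on MacPhee's $6,450
```

Run with those figures, the sketch confirms that roughly a third of the pledged money never reaches the project itself.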

Odds are, MacPhee continues, you’ve probably also asked some of your friends to help you out with your little fundraising drive. That’s more unpaid labor — maybe a lot of it. You and your pals may not think of what you’re doing as “unpaid labor” — except maybe as a labor of love — but that’s what it is to the small group of entrepreneurs and investors that’s scooping the 10 percent off the top of what is, in the aggregate, a very large pile of money:

Kickstarter and Amazon made a grand sitting back and watching you do all that work. This is money they made not only on your friends directly paying it, but also on using you to tap into a deep-seated belief in our culture that volunteering is an important social value. Kickstarter gets its rentier-style money, you get a small portion to fund your project, and everyone else gets to bask in the glow of how wonderful it was for them to participate. What could be more exciting to venture capitalists and CEOs than a way to make money that on the surface seems completely non-coercive and non-exploitative of the raw materials, labor, and consumers involved?

MacPhee goes on to argue that Kickstarter isn’t all that different from a Tupperware-style pyramid operation. It grows by infiltrating its members’ personal networks of friends and acquaintances, through which it recruits an ever growing number of project-launchers and project-funders eager to donate their time and their money to the cause. Maybe MacPhee, in formulating his lengthy indictment, is guilty of overreaching, of finding nefarious dealings in every nook and cranny of what is, at least at some level, a worthy, socially productive business. Then again, when you have venture capitalists and entrepreneurs selling charity to the masses, it’s probably a good idea to do a little digging, to see who’s doing the work and who’s getting the money.

The last invention

The human race’s “last invention,” wrote the British mathematician and original Singularitarian Jack Good in 1964, would be a machine that’s smarter than we are. This “ultra-intelligent machine,” by dint of its ability to create even smarter machines, would, “unquestionably,” ignite an “intelligence explosion” that would provide us with innumerable new inventions, improving our lives in unimaginable ways and, indeed, assuring our survival. We’d be able to kick back and enjoy the technological largesse, fat and happy and, one imagines, immortal. We’d never again go bald or forget where we put the car keys.

That’s assuming, Good threw in, as a quick aside, that “the machine is docile enough to tell us how to keep it under control.” If the machine turned out to be an ornery mofo, then the shit would hit the fan, existentially speaking. We’d end up as pets, or as renewable energy sources. But the dark scenario wasn’t one that Good considered likely. If we could develop an artificial intelligence and set it loose in the world, the future would be bright.

Nearly fifty years have gone by and, though we’re certainly fat, we’re not particularly happy and we’re not at all immortal. Keys are mislaid. Hair falls out. Graves are dug and filled.

Worst of all, we’ve lost our optimism about the benevolence of that ultra-intelligent machine that we still like to think we’re going to build. The Singularity is nearer than ever — 40 years out, right? — and the prospect of its arrival fills us not with joy but with dread. Given our record in such things, it’s hard for us to imagine that the ultra-intelligent machine we design is going to be polite and well-mannered and solicitous — an ultra-intelligent Mary Poppins. No, it’s going to be an ultra-intelligent Snidely Whiplash.

So it comes as a relief to hear that Cambridge University is setting up a Centre for the Study of Existential Risk, to be helmed by the distinguished philosopher Huw Price, the distinguished astrophysicist Martin Rees, and the distinguished programmer Jaan Tallinn, one of the developers of Kazaa and Skype. The CSER will be dedicated to examining and ameliorating “extinction-level risks to our species,” particularly those arising from an AI-fueled Singularity.

In a recent article, Price and Tallinn explained why we should be worried about an intelligence explosion:

We now have machines that have trumped human performance in such domains as chess, trivia games, flying, driving, financial trading, face, speech and handwriting recognition – the list goes on. Along with the continuing progress in hardware, these developments in narrow AI make it harder to defend the view that computers will never reach the level of the human brain. A steeply rising curve and a horizontal line seem destined to intersect! …

It would be comforting to think that any intelligence that surpassed our own capabilities would be like us, in important respects – just a lot cleverer. But here, too, the pessimists see bad news: they point out that almost all the things we humans value (love, happiness, even survival) are important to us because we have a particular evolutionary history – a history we share with higher animals, but not with computer programs, such as artificial intelligences. … The bad news is that they might simply be indifferent to us – they might care about us as much as we care about the bugs on the windscreen.

At this point in reading the article, my spirits actually began to brighten. An indifferent AI seemed less worrisome than a hostile one. After all, the universe is indifferent to us, and we’re doing okay. But then came the kicker:

just ask gorillas how it feels to compete for resources with the most intelligent species – the reason they are going extinct is not (on the whole) because humans are actively hostile towards them, but because we control the environment in ways that are detrimental to their continuing survival.

This threw me back into a funk — and it made me even more eager to see the Centre for the Study of Existential Risk up and running. Until, that is, I began to think a little more about those gorillas. If they haven’t had any luck in influencing a species of superior intelligence with whom they share an evolutionary history (that would be us), isn’t it a little silly to think that we’ll have any chance of influencing an intelligence beyond our own, particularly if that intelligence is indifferent to us and even, from an evolutionary standpoint, alien to us? My mood blackened.

Writing in the shadow of the bomb, Jack Good began his 1964 paper with this sentence: “The survival of man depends on the early construction of an ultra-intelligent machine.” We no longer fear the present as much as Good did, but neither are we able to muster as much confidence in our ability to shape the future to our benefit. As computers have become more common, more familiar, we’ve lost our faith in them. They’ve turned from Existential Hope to Existential Risk. When we imagine our last invention — the end of human progress — we sense not our deliverance but our demise. That may actually say more about what’s changed in us than what’s changed about the future.

The old, weird world

This is the best lead to a story I’ve read in a while:

Fifteen years after vultures disappeared from Mumbai’s skies, the Parsi community here intends to build two aviaries at one of its most sacred sites so that the giant scavengers can once again devour human corpses.

Sign me up.

Bringing smart, intelligent widgets to life

Bruce Sterling reads GE’s “chest-pounding, visionary” white paper on the “industrial internet” and bristles at a sentence: “The full potential of the Industrial Internet will be felt when the three primary digital elements—intelligent devices, intelligent systems and intelligent decision-making— fully merge with physical machines, facilities, fleets and networks.” He comments:

That sounds like everything got “intelligent” all of a sudden and only a stupid guy would fail to leap around with glee about the prospect. But that’s not how this prospect would actually look-and-feel should it be implemented.

Try describing it this way instead: “The full potential of the Industrial Internet will be felt when the three primary digital elements—algorithmic devices, algorithmic systems and algorithmic decision-making— fully merge with physical machines, facilities, fleets and networks.”

Now you’re talking about an entirely plausible world, where heavy industry is entirely infested with software on wireless broadband. Okay, fine. We all know what software is like, because everybody interacts with it all the time. Forget talking about jet engines that are “intelligent.” Start talking about jet engines running apps and swapping data. The jagged, crunchy outlines of an “Industrial Internet” get immediately obvious then.

I’m not sure I get the crunchiness of data zipping through the air, but I second Sterling’s sentiment. Let’s can the insulting marketingspeak of “smart” and “intelligent” and use some more precise adjectives. We’ll all end up feeling a good deal more intelligent.