
Bondi Blue

I kid you not, in the late 1990s I actually paid good cash money for a Macintosh computer that looked like this:

[Image: Power Mac G3 All-in-One]

It was as ugly as Steve Ballmer’s ass. It weighed a million pounds. It was referred to as “The Molar.” That was not a term of endearment.

Apple Computer was dead. It was not “nearly dead,” as you’ll hear some say today. It was doornail dead. It was laid out on a slab in a Silicon Valley morgue, a tag hanging from its toe. Scott McNealy could have purchased the remains for an amount more or less equal to what he was spending on greens fees every month, but at the last moment he came to his senses and put his checkbook back into his pocket. The few pathetic fanboys left on the planet – myself among them – knew when we bought a new Mac that what we were really buying was a memento mori.

In 1996, the Apple board exercised the only option it had left, short of outright dissolution: it paid Steve Jobs to come back and take possession of the corpse. But it wasn’t until two years later that Apple released the first product of the second Jobs era. It was a quirky little computer called an iMac. It didn’t look anything like any other PC on the market. It had an outdated all-in-one design; pretty much every other desktop computer on the market had a CPU box and a separate monitor. It was egg-shaped, not square. It lacked a floppy drive. It had some sort of weird new port called USB. Most unusual of all, it was colorful. It was blue, not the ubiquitous beige of the typical PC box. And the color wasn’t just any run-of-the-mill blue. It wasn’t navy blue or sky blue or baby blue. The color of the computer actually had a name. It was called, Jobs let it be known, bondi blue.

[Image: Bondi blue iMac]

Now, nobody in America knew what the hell a bondi was. We didn’t even know how you were supposed to pronounce it. Was it bond-ee blue, or was it bond-eye blue? But we were soon to learn that the color took its name from Bondi Beach, a long, curving strand just east of Sydney, Australia, popular with sunbathers and surfers. The color of the computer was intended to replicate the color of the sea in that particular part of the world.

Who other than Jobs would have thought, in the middle of the Age of Beige, to fabricate a computer in the color of the ocean off some beach in Australia? It was nuts. But the original iMac was the electroshock that kickstarted Apple’s heart. By no means did it secure the company’s future, and compared to the industry-altering products that were to follow – iPod, iTunes, iPhone, App Store, iPad – it seems like a fairly trivial product today. But it attracted a lot of attention and, even more important, it bought Jobs time. It’s fair to say that, had it not been for the bondi blue iMac, those later products would never have appeared, at least not in their Jobsian form. Apple would have stayed dead, and Jobs would probably have headed off to be a player in Hollywood, maybe even the CEO of Disney.

All the other PC makers back then basically saw their computers as industrial tools. What they cared about – and what most buyers had been told to care about – was the specs of the innards, things like chip speed and hard drive capacity. Jobs sensed that there was in fact a set of computer buyers who might actually want a computer that was the color of the ocean off the coast of Australia – and not only that, but that they might well enjoy forking out a little extra money for the privilege of owning such a computer. A computer, Jobs saw, wasn’t just a tool. It was a fashion accessory. And as the guts of PCs continued to turn into commodities, his instinct was confirmed: it was the outside of the PC – the shape of it, the color of it, the look and feel of it – that came to matter. His insight resurrected Apple and killed the beige box.

Some years after the introduction of that first iMac, I had the opportunity to travel to Sydney, and the fanboy in me demanded that I make a pilgrimage to Bondi Beach. It was a chilly, overcast day, and other than a few joggers and maybe a surfer or two the beach was deserted. I took a picture:

[Photo: Bondi Beach]

Look at the color of the water where it hits the beach. That’s bondi blue. It may well have been Steve Jobs’s greatest invention.

The age of deep automation

Thanks to interconnected computers that are able to compute and communicate at incredibly low costs, we have entered a time of what I’ll call deep automation. The story of modern economies has always been a story of automation, of course, but what’s going on today goes far beyond anything that’s happened before. We don’t know what the consequences will be, but the persistent, high levels of unemployment in developed economies may well be a symptom of deep automation.

In a provocative article in the new issue of the McKinsey Quarterly, W. Brian Arthur argues that computer automation has in effect created a “second economy” that is, slowly, silently, and largely invisibly, beginning to supplant the primary, physical economy:

I want to argue that something deep is going on with information technology, something that goes well beyond the use of computers, social media, and commerce on the Internet. Business processes that once took place among human beings are now being executed electronically. They are taking place in an unseen domain that is strictly digital. On the surface, this shift doesn’t seem particularly consequential—it’s almost something we take for granted. But I believe it is causing a revolution no less important and dramatic than that of the railroads … There’s no upper limit to this, no place where it has to end. Now, I’m not interested in science fiction, or predicting the singularity, or talking about cyborgs. None of that interests me. What I am saying is that it would be easy to underestimate the degree to which this is going to make a difference.

The computer system is, Arthur argues, “intelligent” in only the most basic sense of that word – intelligence defined as the ability of a thing to change its state in response to a stimulus. But, when spread across such enormous and enormously fast information-processing capacity, even that rudimentary degree of intelligence is enough to take over many traditionally human activities, even highly sophisticated ones: “Physical jobs are disappearing into the second economy, and I believe this effect is dwarfing the much more publicized effect of jobs disappearing to places like India and China.” So far, moreover, this new wave of automation, unlike the automation of manual labor during and after the industrial revolution, doesn’t seem to be creating large numbers of good new jobs to replace those it’s supplanting.

That means that, as a society, we now face a very different kind of economic challenge than we’ve faced in recent history:

The second economy will certainly be the engine of growth and the provider of prosperity for the rest of this century and beyond, but it may not provide jobs, so there may be prosperity without full access for many. This suggests to me that the main challenge of the economy is shifting from producing prosperity to distributing prosperity. The second economy will produce wealth no matter what we do; distributing that wealth has become the main problem. For centuries, wealth has traditionally been apportioned in the West through jobs, and jobs have always been forthcoming. When farm jobs disappeared, we still had manufacturing jobs, and when these disappeared we migrated to service jobs. With this digital transformation, this last repository of jobs is shrinking—fewer of us in the future may have white-collar business process jobs—and we face a problem.

Arthur is optimistic that we will be able to figure out a way to solve that problem, though the solution is by no means clear at this point. Distributing prosperity, as we’re seeing today, is not one of America’s traditional strengths – and, indeed, the entire idea is viewed with great suspicion. But if Arthur’s analysis is right – and if we don’t find a solution to the problem – Occupy Wall Street may be just a taste of what’s to come.

Whose book is it, anyway?

Even after I wrote a couple of posts about Amazon’s Kindle announcements last week, something still nagged me – I sensed there was an angle I was missing – and two nights ago it finally hit me. I woke from a fretful sleep and discovered a question pinballing through my synapses: What the heck does Kazuo Ishiguro think about this?

Or, more generally: Whose book is it, anyway?

You might have thought that question was put to rest a few hundred years ago. For quite a while after Gutenberg invented the printing press, the issue of who controlled a book’s contents remained a fraught one. As is often the case, it took many years for laws, contractual arrangements, business practices, and social norms to catch up with the revolutionary new technology. But in due course the dust settled, and control over a book’s contents came to rest firmly in the hands of a book’s author (at least through the term of copyright). Which seems like the proper outcome. You probably wouldn’t, for instance, want book retailers to be able to fiddle with the text of a new book at their whim – that would be annoying, confusing, and wrong. And even if you did want it, it wouldn’t have been particularly practicable, as it would have required a retailer to invest in printing a special edition of the book or to have its employees go through every copy of the standard edition and mark it up with a Sharpie. Not only was authorial control over a text secured through laws and contracts, but it was also reinforced by the fact that printed books resisted easy emendation.

Case closed. Done deal. Everyone’s happy.

Until now.

At Amazon’s announcement last week, one of the things CEO Jeff Bezos introduced was the company’s new X-Ray feature – essentially a proprietary hypertext system for Kindle touchscreen ebooks. He demonstrated the feature by “X-Raying” Ishiguro’s acclaimed 1989 novel The Remains of the Day. With X-Ray, you tap on a page of a book, and you get a list of salient terms that appear on the page – character names, historical events, places, and so forth – along with a graph (an “X-Ray”) that indicates the frequency with which the terms are used throughout the book. You can then tap on a term to call up an explanatory article from Wikipedia (for glosses of facts) or Shelfari (for characters and other literary devices). To speed the hyperlinking process, Amazon does a technologically nifty trick: it bundles the relevant text from Wikipedia and Shelfari with the text of the book when it downloads the book to your Kindle. The company determines which supplementary text to include, as well as which terms to highlight, through a computerized textual analysis, which identifies what Amazon terms the “interesting phrases” in the book.
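Amazon hasn’t published how that “interesting phrases” analysis actually works, but the general shape of the idea (scan the text for recurring names and places, note which pages they fall on, and package pre-fetched glosses alongside the book so the links work without a live lookup) can be sketched in a few lines of Python. Everything below, from the function names to the toy four-page “book,” is a hypothetical illustration, not Amazon’s implementation.

```python
import re
from collections import Counter

# Capitalized word runs are a crude proxy for the names and places
# an X-Ray-style index would care about.
PHRASE = re.compile(r"\b[A-Z][a-z]+(?:\s[A-Z][a-z]+)*\b")

def interesting_phrases(pages, min_pages=2):
    """Keep phrases that recur on at least min_pages pages, and record
    where they appear so a reader app could draw a frequency graph."""
    page_counts = Counter()
    locations = {}
    for i, page in enumerate(pages):
        for phrase in set(PHRASE.findall(page)):
            page_counts[phrase] += 1
            locations.setdefault(phrase, []).append(i)
    return {p: locations[p] for p, n in page_counts.items() if n >= min_pages}

def bundle(pages, glosses):
    """Package the book text, its term index, and any pre-fetched
    glosses into one object that travels with the download."""
    index = interesting_phrases(pages)
    return {
        "pages": pages,
        "terms": index,  # phrase -> list of page numbers it appears on
        "glosses": {term: glosses.get(term, "") for term in index},
    }

if __name__ == "__main__":
    book = [
        "Stevens drives west from Darlington Hall.",
        "He recalls his years with Miss Kenton at Darlington Hall.",
        "Miss Kenton, now living in Cornwall, has written to Stevens.",
        "Stevens reaches Cornwall and meets Miss Kenton once more.",
    ]
    package = bundle(book, {"Darlington Hall": "The country house where Stevens serves."})
    for term, pages in package["terms"].items():
        print(term, "-> pages", pages)
```

The bundle step mirrors the detail noted above: the supplementary text travels with the book at download time rather than being fetched each time a reader taps a term.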

In one sense, X-Ray expands a feature that has been common in early ebook readers: the ability to call up a dictionary definition of a word. But X-Ray goes much further, both in augmenting the author’s original text and in integrating the additions into the reading experience. Some may see the additions as enhancements, others as irritants, but whether good or bad they represent an editorial intrusion into the contents of a book by a third party – a retailer, in this case. As such, they exist, I think it’s fair to say, in an ethical and perhaps legal gray area. That seems particularly true of novels, where the addition of descriptions of characters and other fictional elements would seem to intrude very much into the author’s realm. (I have to think X-Ray will make a lot of novelists nervous.) The fact that the supplementary text is sold along with the actual text makes the intrusion all the starker.

There are some obvious practical questions stemming from X-Ray, though I don’t see any evidence that Amazon or publishers have grappled with them yet:

Do the X-Ray system and its textual additions violate copyright controls or contractual arrangements?

Should Amazon be required to secure an author’s permission before X-Raying the author’s book? Should, in other words, X-Ray be opt-in? And if it’s not opt-in, should an author (or publisher) be able to opt out?

Should an author be able to vet (or even add to) the supplementary information included with a book?

If, eventually, product recommendations or advertisements are included in the supplementary material triggered by X-Ray, should the author share in any resulting revenues?

There are also more theoretical questions, having to do with the aesthetics of literature, the integrity of works of art and craft, and the ethics of writing and reading.

I suspect that all these questions, and other related ones, will only become more salient and more complicated in the years ahead. Should X-Ray prove to be even a modest competitive advantage to the Kindle (or to Shelfari, which is owned by Amazon), we can expect other companies that provide e-readers or e-reading applications – Apple and Barnes & Noble, for instance – to introduce their own proprietary systems for amending and augmenting the text of a book. And we can expect Amazon to continue to extend the functionality of X-Ray. The intrusions onto the author’s traditional territory will only grow, and go deeper.

So whose book is it? Suddenly, that’s an open question again.

Matter-eater lads

Now here’s a sight for sore eyes: Guided by Voices in its original (more or less) lineup recording Let’s Go Eat the Factory, the lineup’s first new album since 1996’s Under the Bushes Under the Stars, in a basement rec room, with Robert Pollard singing in a doorway and bass player Greg Demos monitoring the TASCAM four-track cassette recording deck while sitting in a chair that appears to have been stolen from a kindergarten:

[Photo: Guided by Voices recording in a basement rec room]

Pollard is one of the great American artists of the past 50 years, though I suspect it will be another 50 years before that begins to be acknowledged.

[Image: Matter-Eater Lad]

There are more things in heaven and earth, Lightning Boy, than are dreamt of in your philosophy.