The trailer park is the computer

Microsoft is about to take trailer park computing, or, as The Register memorably dubbed it, white trash computing, to its logical and necessary conclusion. The company’s next generation of utility data centers will take the form of – you guessed it – trailer parks: sprawling, roofless parking lots in which all the components – server clusters, power units, security systems – will be prefabricated offsite, packed into containers or other types of “modules,” trucked in, and plopped down on the ground as needed. (No word on whether employees at the new centers will be required to wear wifebeaters or carry around 30-packs of Busch Light.)

In an extensive blog post, Microsoft’s top data-center guy, Michael Manos, lays out the details of what the company calls its “Gen 4” centers, which will become the cornerstones of its “hyper-scale cloud infrastructure” for at least the next five years. He writes:

If we were to summarize the promise of our Gen 4 design into a single sentence it would be something like this: “A highly modular, scalable, efficient, just-in-time data center capacity program that can be delivered anywhere in the world very quickly and cheaply, while allowing for continued growth as required.” [You can tell Manos is a real data-center guy because he’s under the impression that sentences don’t require verbs.] From a configuration, construct-ability and time to market perspective, our primary goals and objectives are to modularize the whole data center. Not just the server side … but the mechanical and electrical space as well. This means using the same kind of parts in pre-manufactured modules, the ability to use containers, skids, or rack-based deployments and the ability to tailor the Redundancy and Reliability requirements to the application at a very specific level.

The modularity of the systems will, in other words, allow the company to tailor the sophistication (and cost) of its infrastructure to the varying levels of service quality that users expect from different web apps, allowing reductions in capital costs, Manos says, of “20%-40% or greater depending upon class [of app].” Equally important, from a cost standpoint, the new design will allow the company “to deploy capacity when our demand dictates it” rather than “mak[ing] large upfront investments.” This underscores one of the core economic challenges that has faced every utility through history and will face the new computing utilities as well: the need to match capacity to demand on an ongoing basis to ensure that capital is used efficiently.
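To make the capacity-matching point concrete, here's a toy model (all numbers invented, nothing from Microsoft) comparing one big upfront build against trucking in prefab modules as demand arrives. Deferred capital is cheaper capital once you discount it:

```python
import math

# Toy model (all numbers invented) of modular, just-in-time capacity
# versus one large upfront build. Demand is a five-year forecast in MW.

MODULE_MW = 5          # capacity added per prefab module
COST_PER_MW = 10.0     # capital cost per megawatt (arbitrary units)
RATE = 0.10            # discount rate: deferred spending is cheaper spending
demand = [8, 14, 19, 27, 33]

# Upfront: build enough modules for peak demand at year 0.
upfront_npv = math.ceil(max(demand) / MODULE_MW) * MODULE_MW * COST_PER_MW

# Just-in-time: truck in a module only when demand would outrun capacity,
# discounting each purchase back to year 0.
capacity, jit_npv = 0, 0.0
for year, mw_needed in enumerate(demand):
    while capacity < mw_needed:
        capacity += MODULE_MW
        jit_npv += MODULE_MW * COST_PER_MW / (1 + RATE) ** year

print(f"upfront build: {upfront_npv:.1f}")   # 350.0
print(f"just-in-time:  {jit_npv:.1f}")       # ~296.1 -- same capacity, less capital
```

Same 35 megawatts either way; the savings come entirely from spending later rather than sooner, which is exactly the utility's perennial balancing act.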

On the green side of things, Manos says he expects the open-air design of the centers to “completely eliminate the use of water [for cooling]. Today’s data centers use massive amounts of water and we see water as the next scarce resource and have decided to take a proactive stance on making water conservation part of our plan.” I may be reading too much into it, but I take this as a dig at Google, which up to now has sited its data centers in places where it has easy access not only to cheap electricity but to megagallons of water. In fact, I think Microsoft’s openness about how it builds its data smelters is meant to draw a contrast with Google’s hyper-secrecy. “By sharing [our plans] with the industry,” writes Manos, “we believe everyone can benefit from our methodology. While this concept and approach may be intimidating (or downright frightening) to some in the industry, disclosure ultimately is better for all of us.” Translation: Microsoft is all about sharing, while the Googlers are stingy and selfish. (That’s a nice PR twist, but I’m guessing Google would argue that the reason it’s more secretive is that it has more valuable stuff to hide.)

Oh yeah: there is the obligatory animated video, and it has a bitchin’ soundtrack:

Video: Microsoft Generation 4.0 Data Center Vision (http://video.msn.com/video.aspx?vid=b4d189d3-19bd-42b3-85d7-6ca46d97fe40)

Roofless!

Cloud as a feature

Microsoft has been touting its “software plus services” strategy for some time, but if you want to see some of the most creative thinking about how to meld cloud services with traditional PC software, you’d do well to look not at Microsoft but at Mathematica. Wolfram Research, which makes Mathematica, a heavy-duty and widely used program for computation and modeling, announced last week that it will build “the cloud” into the latest version of the application, delivering utility computing as, in essence, a seamless feature of its software.

Wolfram is working with two partners, Nimbis Services and R Systems, who specialize in supercomputing – or “high performance computing” (HPC) as it’s often called today – to “enable the Mathematica cloud service to access many diverse HPC systems, including TOP500 supercomputers and the Amazon Elastic Compute Cloud [EC2].” The “service will provide flexible and scalable access to HPC from within Mathematica, simplifying the transition from desktop technical computing to HPC,” the company says.

A representative from Amazon Web Services explains how Mathematica will tap into EC2 and the benefits it can provide to users:

Mathematica is a true cloud service offering. They connect to Amazon’s Cloud from within Mathematica. So you can simply use all the powerful features of Mathematica and ask it to run it in the cloud. For example, you don’t need to buy a Digital Image Processing package to do image processing in-the-Cloud. It’s all bundled in.

The workflow is very simple to understand and it takes very few clicks to deploy your code in the cloud. A typical Mathematica user develops code in their standard notebook interface, a programming concept that defines their input code and output results, including graphics. The user specifies input cells, output cells and other parameters. Mathematica will evaluate one input cell at a time so evaluation could take a lot of time to process on one machine. Now, with the new Cloud service, users can evaluate the entire notebook in one shot by pushing it to the cloud.

The HPC Cloud Service lets users take the entire notebook, click a few buttons in the HPC Cloud Service GUI and ask it to run it in the cloud. The HPC Cloud Service evaluates the code, runs it in parallel Mathematica sessions, bundles up the results and notifies the user. In other words, a user can test the code (a Mathematica Notebook) with a small amount of input, then increase the size of the input to a more realistic scale, push it to the cloud so it runs on hundreds and even thousands of nodes in parallel, and get notified when it’s done.
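If you squint past the Mathematica specifics, the pattern is just “map independent cells across a pool of workers.” Here’s a minimal sketch of that idea in Python – the cells, the evaluator, and the pool are all stand-ins of mine, not Wolfram’s actual plumbing, and it assumes the cells don’t depend on one another:

```python
# A toy sketch of "evaluate the whole notebook in one shot in the cloud."
# Everything here is a hypothetical stand-in for the real Wolfram/EC2
# plumbing, and it assumes the input cells are independent of one another.

from concurrent.futures import ProcessPoolExecutor

def evaluate_cell(cell_source: str) -> str:
    """Stand-in for one Mathematica kernel evaluating one input cell."""
    return f"Out: {eval(cell_source)}"  # a real worker would call a kernel

notebook = ["2 + 2", "sum(range(10))", "max(3, 7, 5)"]  # toy "input cells"

if __name__ == "__main__":
    # Desktop-style: one kernel evaluates one cell at a time.
    serial = [evaluate_cell(cell) for cell in notebook]

    # Cloud-style: push all the cells out to parallel workers, bundle
    # the results, and hand them back in one shot.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(evaluate_cell, notebook))

    print(serial == parallel, parallel)  # same answers, computed in parallel
```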

We certainly need a nifty abbreviation for the merging of cloud services into traditional PC software, so let me suggest CaaF, for Cloud-as-a-Feature. Wolfram is by no means the first company to roll out a CaaF offering – Apple’s MobileMe incorporates lots of cloud features, and even Google Earth is kind of CaaFy – but Mathematica’s example shows most clearly how the cloud can be used to build powerful new features into existing programs. It’s a model that is sure to expand as programmers think more creatively about the cloud. “Imagine,” says the Amazon rep, “if you could do this with any software and simply click the ‘Run it in-the-Cloud’ button, run everything in parallel and get your results faster.”

I recently suggested that the cloud is most interesting as a means of doing new things with computers, rather than just doing old things in a new environment. By radically changing the economics of high-performance computing, the cloud democratizes the supercomputer. What Wolfram is doing points to one way that software companies can exploit those new economics.

Post-pixel

Where did the pixels go? Not so long ago, you couldn’t look at type on a computer without seeing the ghost of the screen’s pixel grid behind it. But as screen resolutions have improved (thank you, Moore’s Law), pixels have become at once far more plentiful and far less visible. Indeed, the pixel has all but disappeared. Look hard at what you’re reading: Can you see one?

Jason Kottke points to a fascinating little essay on the death of the pixel by type designer Jonathan Hoefler. Digital designers long struggled with the tension between graceful letterforms and the clumsy, pointillist grid on which they had to be rendered. But, as Hoefler points out, the struggle was nothing new. Hundreds of years ago, words were routinely embroidered on fabric, requiring the adaptation of type to a rough pixel grid. “Renaissance ‘lace books’ have much to offer the modern digital designer,” writes Hoefler, “who also faces the challenge of portraying clear and replicable images in a constrained environment.”

But, Hoefler goes on, that age-old challenge is going away: “Pixels were the stuff of my first computer, which strained to show 137 of them in a square inch; my latest cellphone manages 32,562 in this same space, and has 65,000 colors to choose from, not eight. Its smooth anti-aliased type helps conceal the underlying matrix of pixels, which are nearly as invisible as the grains of silver halide on a piece of film.” The pixel’s existence is “moribund,” and it’s only going to become more so: “Crisp cellphone screens aren’t the end of the story. There are already sharper displays on handheld remote controls and consumer-grade cameras, and monitors supporting the tremendous WQUXGA resolution of 3840×2400 are making their way from medical labs to living rooms.”
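Hoefler quotes pixels per square inch, which makes the jump sound even bigger than the linear measure spec sheets usually use. The conversion is just a square root; a quick check of his arithmetic:

```python
import math

# Hoefler's figures are pixels per square inch; linear density (ppi)
# is the square root of that.
print(f"first computer: {math.sqrt(137):.0f} ppi")     # ~12 ppi
print(f"cellphone:      {math.sqrt(32_562):.0f} ppi")  # ~180 ppi

# WQUXGA is 3840 x 2400; on a hypothetical 22-inch-diagonal monitor
# (my assumption -- Hoefler names no screen size) that works out to:
print(f"22-inch WQUXGA: {math.hypot(3840, 2400) / 22:.0f} ppi")  # ~206 ppi
```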

So what’s left for the once-mighty pixel? “It’s likely that the pixel’s final and most enduring role will be a shabby one, serving as an out-of-touch visual cliché to connote ‘the digital age.’”

That’s kind of touching. Honestly, I hardly even noticed that pixels had disappeared until I read Hoefler’s essay. And now I find myself a wee bit nostalgic for the little sons of bitches.

Xeroxing the brain

Anders Sandberg and Nick Bostrom, of Oxford’s Future of Humanity Institute, have published an in-depth roadmap for “whole brain emulation” – in other words, the replication of a fully functional human brain inside a computer. “The basic idea” for whole brain emulation (WBE), they write, “is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain.” It’s virtualization, applied to our noggins.

Though “currently only a theoretical technology,” WBE is, the authors say, “the logical endpoint of computational neuroscience’s attempts to accurately model neurons and brain systems” and “may represent a radical new form of human enhancement.” In something of an understatement, they write that “the economic impact of copyable brains could be immense, and could have profound societal consequences.”

The document is a fascinating one, not only in its comprehensive description of “how a brain emulator would work if it could be built and [the] technologies needed to implement it,” but also in its expression of an old-school materialist conception of the human mind (a conception that is in tension with some of neuroscience’s more interesting recent discoveries). The authors’ belief that it is, at least theoretically, possible to build a brain emulator “that is detailed and correct enough to produce the phenomenological effects of a mind” leads them, inevitably, to the issue of free will.

They deal with the problem of free will, or, as they term it, the possibility of a random or “physically indeterministic element” in the working of the human brain, by declaring it a non-problem. They suggest that it can be dealt with rather easily by “including sufficient noise in the simulation … Randomness is therefore highly unlikely to pose a major obstacle to WBE.” And anyway: “Hidden variables or indeterministic free will appear to have the same status as quantum consciousness: while not in any obvious way directly ruled out by current observations, there is no evidence that they occur or are necessary to explain observed phenomena.”
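For the record, “including sufficient noise in the simulation” is routine in computational neuroscience. Here’s what it looks like in practice – a textbook leaky integrate-and-fire neuron with a Gaussian noise term bolted on, my toy illustration rather than anything specified in the roadmap:

```python
import numpy as np

# A textbook leaky integrate-and-fire neuron with additive Gaussian noise --
# a toy illustration of "including sufficient noise in the simulation,"
# not anything specified in the Sandberg/Bostrom roadmap itself.

dt, steps = 0.1e-3, 5000            # 0.1 ms timestep, 0.5 s of simulated time
tau = 20e-3                         # membrane time constant (seconds)
v_rest, v_reset = -70e-3, -70e-3    # resting / post-spike potential (volts)
v_thresh = -54e-3                   # spike threshold (volts)
drive = 18e-3                       # constant input drive (volts, R*I folded in)
sigma = 2e-3                        # noise amplitude: the "indeterministic element"

rng = np.random.default_rng(0)
v, spike_times = v_rest, []
for step in range(steps):
    noise = sigma * np.sqrt(dt / tau) * rng.standard_normal()
    v += (dt / tau) * (v_rest - v + drive) + noise
    if v >= v_thresh:               # threshold crossing = a spike
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes; change the seed and only the timing jitters")
```

The authors’ point, in code: the noise term stands in for whatever indeterminism the wet brain has, and nothing else about the model needs to change.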

The only way you can emulate a person with a computer is by first defining the person to be a machine. The Future of Humanity Institute would seem to be misnamed.

Thank you, Ron Rosenbaum

Our Resident Philistine receives his due.

UPDATE: Writing in the most recent issue of the American Journalism Review, Paul Farhi provides a thoughtful assessment of what’s killing the newspaper trade. After reviewing the broad economic trends that are undermining the fortunes of the news business (both in print and online), he asks:

Could smarter reporting, editing and photojournalism have made a difference? Can a spiffy new Web site or paper redesign win the hearts of readers? Surely, they can’t hurt. But if we, and our critics, were realistic, we’d admit that much is beyond our control, and that insisting otherwise is vain. As British media scholar and author Adrian Monck put it in an essay about the industry’s troubles earlier this year: “The crops did not fail because we offended the gods.”

His piece provides a good counterweight to the nostrums of ORP and his ilk, which often seem to boil down to: “If we don’t kill journalism, it’ll die.”

UPDATE: On another related note, I thought Nick Denton’s recent advice to online media outfits was revealing:

Get out of categories such as politics to which advertisers are averse. That’s easier for us to say since we spun off Wonkette earlier this year. And outfits such as the Huffington Post and most big-city newspapers—defined by their political coverage—will have difficulty redefining themselves. But media groups cannot afford in the current environment to fund their most noble missions; they should leave that to public-spirited non-profits such as Pro Publica.

I sense a ruefulness under the surface – well under the surface – of Denton’s words. (His advice, incidentally, illustrates my own assessment of how the new economics of news will over the long run shape what’s covered.)

Your new BFF

“Scientists have created the first ‘humanoid’ robot that can mimic the facial expressions and lip movements of a human being,” reports today’s Daily Mail. The robot, named Jules, is, as the paper delicately puts it, “a disembodied androgynous robotic head.” (Which, come to think of it, is kind of what all of us become when we go online.)

Here’s how it works:

Human face movements are picked up by a video camera and mapped onto the tiny electronic motors in Jules’ skin. It can grin and grimace, furrow its brow, and “speak” as the software translates real expressions observed through video camera “eyes.” Jules then mimics the facial expressions of the human by converting the video image into digital commands that make the robot’s servos and motors produce mirrored movements. And it all happens in real time as Jules can interpret the commands at 25 frames per second.
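In other words, it’s a calibration-and-rescaling loop. Here’s a rough sketch of what such a mapping might look like – every name and number below is my invention, since the article doesn’t document Jules’s actual software:

```python
import time

# Hypothetical sketch of the face-to-servo mapping loop described above.
# Landmark names, calibration ranges, and the send_to_servo stub are all
# invented for illustration; the article doesn't document Jules's software.

FPS = 25                          # Jules interprets commands at 25 frames/sec
SERVO_MIN, SERVO_MAX = 0, 180     # assumed servo travel in degrees

# Per-landmark calibration: the observed range of each tracked feature
# (normalized image coordinates) that should span the servo's full travel.
CALIBRATION = {
    "brow_height":  (0.30, 0.45),
    "mouth_open":   (0.00, 0.20),
    "lip_corner_x": (0.40, 0.60),
}

def to_servo_angle(landmark: str, value: float) -> float:
    """Linearly rescale a tracked landmark position into a servo angle."""
    lo, hi = CALIBRATION[landmark]
    t = min(max((value - lo) / (hi - lo), 0.0), 1.0)  # clamp to [0, 1]
    return SERVO_MIN + t * (SERVO_MAX - SERVO_MIN)

def send_to_servo(landmark: str, angle: float) -> None:
    print(f"{landmark}: {angle:5.1f} deg")  # stand-in for the motor command

# One frame's worth of (made-up) tracker output, mirrored onto the motors.
frame = {"brow_height": 0.41, "mouth_open": 0.07, "lip_corner_x": 0.52}
for name, value in frame.items():
    send_to_servo(name, to_servo_angle(name, value))
time.sleep(1 / FPS)  # pace the loop at 25 frames per second
```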

But let’s cut to the video:

I think I know who’s going to give the keynote at next year’s Singularity Summit.

No problem

“Google is the answer to the problem we didn’t have,” says bookstar Malcolm Gladwell. “It doesn’t tell you what’s interesting or what’s important.”

Ah, but it does give you a snapshot of the consensus view of what’s important. Isn’t that good enough?