Apple intros $13,000 PC

Apple today introduced an eight-core version of its Mac Pro computer, the top model of which, tricked out with 16 gigs of RAM and four 750-gig hard drives, will cost you more than thirteen grand. Didn’t anyone tell Jobs that these things are supposed to be commodities?

[Image: the $13,000 Mac Pro configuration]

(Monitor not included.)

UPDATE: Steve responds, in a way.

Amazon patents cybernetic mind-meld

As noted by Slashdot, Amazon.com was on March 27 granted a broad patent for computer systems that incorporate human beings into automated data processing – the type of cybernetic arrangement that underpins the company’s Mechanical Turk service. With Mechanical Turk, a software programmer can build into a program a task that is difficult for a computer to do but easy for a person to carry out, such as identifying objects in photographs. At the point in the program when the “human input” is required, the task is posted to Amazon’s Mechanical Turk website, where people carry it out for a small payment. The human input is then funneled back to the computer running the program.
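For the programming-minded, the call pattern looks something like the minimal Python sketch below. The mturk_post_task and mturk_get_answer helpers are hypothetical stand-ins, not Amazon’s actual API; the point is only the control flow, in which the program blocks on a question software can’t answer well and a paid human fills in the blank.

```python
# A minimal sketch of the "human input" call pattern described above.
# mturk_post_task / mturk_get_answer are hypothetical placeholders for the
# real Mechanical Turk web-service calls; only the shape of the loop matters.

import time
from typing import Optional


def mturk_post_task(question: str, reward_cents: int) -> str:
    """Hypothetical: publish the task to the Mechanical Turk site, return a task id."""
    print(f"Posted task for {reward_cents} cents: {question!r}")
    return "task-001"


def mturk_get_answer(task_id: str) -> Optional[str]:
    """Hypothetical: return a worker's answer once one has been submitted."""
    return "a golden retriever on a beach"  # pretend a worker has answered


def identify_objects(photo_url: str) -> str:
    """The step that is hard for software but easy for a person."""
    task_id = mturk_post_task(
        question=f"List the objects you see in the photo at {photo_url}",
        reward_cents=5,
    )
    answer = mturk_get_answer(task_id)
    while answer is None:          # poll until a human responds
        time.sleep(1)
        answer = mturk_get_answer(task_id)
    return answer                  # funneled back into the program


if __name__ == "__main__":
    print(identify_objects("http://example.com/photo123.jpg"))
```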

The patent, as Amazon describes it, covers “a hybrid machine/human computing arrangement which advantageously involves humans to assist a computer to solve particular tasks, allowing the computer to solve the tasks more efficiently.” It specifies several applications of such a system, including speech recognition, text classification, image recognition, image comparison, speech comparison, transcription of speech, and comparison of music samples. Amazon also notes that “those skilled in the art will recognize that the invention is not limited to the embodiments described.”

The patent, which reads like an instruction manual from a dystopian future, goes into great detail about how the system might work in evaluating the skills and performance of the “human operated nodes.” The system might, for example, want to classify the human workers according to whether they are “college educated, at most high school educated, at most elementary school educated, [or] not formally educated.” It also lays out an example of the system incorporating “multiple humans” to carry out a particular subtask, “each of the humans being identified as being capable of satisfying at least some of the associated criteria for the [subtask],” and then synthesizing a result from their combined inputs.
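The “multiple humans” arrangement is, in effect, redundancy and voting applied to people. Here is a hedged sketch of what synthesizing a result from several workers’ inputs might look like; plain majority voting is my assumption for illustration, not a method the patent prescribes.

```python
# A sketch of combining several human answers into one result. The patent
# describes synthesizing a result from multiple qualified workers; simple
# majority voting, used here, is just one obvious way to do that.

from collections import Counter
from typing import List, Optional


def synthesize(answers: List[str], min_agreement: float = 0.5) -> Optional[str]:
    """Return the most common answer if enough workers agree, else None
    (meaning the task might be reposted or escalated)."""
    if not answers:
        return None
    winner, votes = Counter(a.strip().lower() for a in answers).most_common(1)[0]
    return winner if votes / len(answers) > min_agreement else None


# Three workers label the same photograph:
print(synthesize(["dog", "Dog", "cat"]))   # -> 'dog' (2 of 3 agree)
print(synthesize(["dog", "cat", "bird"]))  # -> None (no majority)
```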

The patent would appear to be a brilliant hedging strategy by Amazon. There may come a time when humans are so busy carrying out menial tasks for computer systems that they have neither the time nor the money to buy books and other goods online. If so, Amazon can still look forward to earning hefty licensing fees on its patented system, which could emerge as the central engine of the post-human economy.

Dell heads for the cloud

Dell hasn’t yet rolled out its trailer computer, but this week it did make clear that it aims to compete with the likes of Sun, IBM and Rackable in supplying the heavy metal for the big-ass data centers of the future. On Tuesday, the company announced that it was setting up a Data Center Solutions unit to serve Internet companies and other firms that “are building out fleets of servers with power, thermal, and density issues of such a massive scale that they invite new approaches for optimization.” The new unit’s debut offering is dubbed the “Cloud Computing Solution,” which VP Forrest Norrod calls “a computing solution for ‘Hyperscale’ data center environments including ‘design-to-order’ hardware, planning and deployment expertise and customized services targeted to address key needs generally not met by traditional hardware vendors.”

Although Dell claims that the new “design-to-order” business is a logical “next step” from its traditional “build-to-order” operation, it’s hard to see much of a connection between the two. Designing and outfitting customized high-end data centers is about as far removed from cranking out cheap boxes as you can get. This is another clear sign that Dell is abandoning the old “don’t innovate; commoditize” mantra that served it so well for so many years. Having watched competitors like Hewlett-Packard eat away at its cost advantage in the generic PC and server markets, Dell is now trying to follow other hardware makers up-market into more specialized (and profitable) machines and services. That will require bigger investments in R&D and, in turn, a different kind of cost structure – as well as a new and very different positioning and image in the enterprise market. It’s a tough challenge.

There’s also the larger question of how big the “hyperscale computing” market will turn out to be. Will there be a lot of companies building next-generation data centers, or just a few giant Google-like utilities with a lot of purchasing clout (and a lot of homegrown engineering talent)? The supply side’s already getting crowded, and the demand side is still pretty, uh, cloudy.

I do have one recommendation for Dell, though. When it gets around to introducing its trailer computer, it should let the guys at its Alienware subsidiary design the sucker. I would like to see some Alienware trailers barreling down the highway.

UPDATE: Adding to Dell’s challenges, the company announced today, after the stock market closed, that an internal investigation into accounting irregularities revealed “a number of accounting errors, evidence of misconduct, and deficiencies in the financial control environment.”

UPDATE: Larry Dignan follows up with an interview with Forrest Norrod. Norrod doesn’t get into details, but he does give an intriguing hint that in the end this may turn out to be another commoditization play for Dell, as the design of “hyperscale” data centers evolves toward a few standard setups:

Here’s the [current] process: Big customer comes to Dell for data center services and gives the company its specification and infrastructure plans. From there, the customer’s plan is discussed to address everything from power supplies to processor requirements to cooling to software to network capabilities. “We go through discovery of requirements and constraints,” said Norrod. “We come back in about a month with a [custom hardware configuration]” …

The big question is whether this model could scale for Dell. Norrod acknowledges that there are “limits to scalability,” but over time these [hardware configurations] could be mass produced. Dell’s [Cloud Computing System] customers currently all have different solutions to data center design, but some commonality is emerging. For instance, a large Web company that is a Dell customer cooked up a data center design that looked a lot like what a financial services firm was attempting. “Today the equipment is relatively diverse, but there is some commonality. Classes of applications and infrastructure philosophies will wind up being common for approaches,” said Norrod.

Of course, isn’t that what Sun and Rackable are already doing with their trailers – creating common hardware components at the data-center level rather than the server level?

Showdown in the trailer park II

Greg Linden points to an excellent paper by Microsoft’s James Hamilton that puts trailer park computing into the broader context of the evolution of the data center in the emerging age of utility computing. Hamilton presented his paper at the Biennial Conference on Innovative Data Systems Research in January (PowerPoint slides also available). Hamilton’s ideas provide a complement and counterpoint to a presentation on the future of computing demand that Sun’s Greg Papadopoulos gave in February (pdf of slides).

Both Hamilton and Papadopoulos see a similar set of trends – notably, the growing popularity of software as a service, the growing demand for supercomputing in certain industries like pharmaceuticals and financial services, and the expansion of computer-intensive “Web 2.0” applications aimed at consumers – spurring demand for extremely powerful, extremely reliable, and extremely efficient data centers. The big question is: How will this demand be filled? What will turn out to be the default architecture for the utility data center? Will it be constructed out of cheap commodity components (a la Google and Amazon), or will it be constructed out of specialized utility-class machinery supplied by companies like Sun, Rackable, and 3PAR? (Google itself is rumored to have its own fleet of homemade containerized data centers.)

Papadopoulos, true to his Sun heritage, argues for the specialized model, while Hamilton, true to his Microsoft heritage, argues for the commodity model. But for the commodity model to work, contends Hamilton, the commodity components will need to be combined into a meticulously engineered system that achieves high power, reliability, and efficiency – a data center in a box, or, more precisely, a data center in a container. The portability of a containerized data center, Hamilton further argues, will allow for a utility operation to be based on a network of smaller data centers, deployed in many different places, which he believes will offer social and political advantages over large, monolithic centers. As he puts it:

Commodity systems substantially reduce the cost of server-side computing. However, they bring new design challenges, some technical and some not. The technical issues include power consumption, heat density limits, communications latencies, multi-thousand-system administration costs, and efficient failure management with large server counts. The natural tendency is towards building a small number of very large data centers and this is where some of the non-technical issues come to play. These include social and political constraints on data center location, in addition to taxation and economic factors. All diminish the appeal of the large, central data center model. Multiple smaller data centers, regionally located, could prove to be a competitive advantage.

To address these technical and non-technical challenges, we recommend a different granule of system purchase, deployment, and management … we propose using a fully-populated shipping container as the data-center capitalization, management, and growth unit. We argue that this fundamental change in system packaging can drive order-of-magnitude cost reductions, and allow faster, more nimble deployments and upgrades.

As Hamilton notes, the shift to viewing an entire data center as a “unit” of computing would be consistent with the ongoing shift from buying individual servers to buying racks of servers. He makes a good case that the standard shipping container provides the ideal form factor for this new computing unit:

Shipping containers are ideal for the proposed solution: they are relatively inexpensive and environmentally robust. They can be purchased new for $1,950 each, while remanufactured units range around $1,500. The units are designed to successfully transport delicate goods in extreme conditions and routinely spend weeks at a time on the decks of cargo ships in rough ocean conditions and survive severe storms. More importantly, they are recognized world-wide: every intermodal dispatch center has appropriate cargo handling equipment, a heavy duty fork lift is able to safely move them, and they can be placed and fit in almost anywhere. The container can be cost-effectively shipped over the highway or across oceans. Final delivery to a data center is simple using rail or commercial trucking. The container can be placed in any secure location with network capacity, chilled water, and power and it will run without hardware maintenance for its service life …

At the end of its service life, the container is returned to the supplier for recycling. This service life will typically be 3 years, although we suspect that this may stretch out in some cases to 5 years since little motivation exists to be on the latest technology.

If Papadopoulos and Hamilton take different views of the kinds of components – commoditized or specialized – that will be assembled into these boxes, they seem to agree that what will matter is not the component but the entire system. If they’re right, then the computer industry may, as I’ve suggested before, be going back to more of the mainframe model, where profits lie in high-end engineering rather than low-end assembly.

That remains to be seen. In the meantime, Hamilton’s paper is a must-read for anyone interested in the future of computing.

Showdown in the trailer park

Sun, which has its Blackbox containerized data center out on tour, is suddenly facing some tough competition in the burgeoning trailer park computing market. Rackable Systems is rolling out a copycat product called Concentro that may just outdo the original. Besides sporting a most excellent name – if Flash Gordon had a computer, it would be called Concentro – Rackable’s portable data center comes in a 40-foot shipping container, making Sun’s 20-foot model look downright wimpy, and it can be packed with 9,600 processing cores or 3.5 petabytes of storage. Best of all is the interior. Check it out:

[Image: the interior of Rackable’s Concentro portable data center]

Stick David Bowie in there, and you’ve got your moody sci-fi epic half made.

For more details, see articles by the Register’s Ashlee Vance and IT Jungle’s Timothy Prickett Morgan. Prickett Morgan notes that Concentro comes equipped with LoJack, just in case some meth-addled kid decides to hook it to the back of his Ram Charger and go for a joyride.

Born again again

LifeChurch.tv, the high-tech evangelical church, has opened a 16-acre campus in Second Life, reports New Scientist. Bobby Gruenewald, LifeChurch.tv’s Pastor-Innovation Leader, writes on his blog that the virtual church will feature on-demand video, free virtual t-shirts, a special “LifeKids” area for little avatars, and something called a “Mysecret.tv glass house.” Beyond providing a gathering place for the virtual faithful, LifeChurch.tv hopes to use its presence to help redeem Second Life. Writes Gruenewald: “I need to warn you that there is a huge problem on Second Life with porn and ‘virtual sex.’ It is one of several reasons we are there, but it is also something that you need to be on guard about.” To combat the licentiousness, LifeChurch.tv has invited the “Porn Pastors” from xxxchurch.com to set up a mission within the campus.

The establishment of virtual churches and congregations of avatars would seem to raise some knotty theological questions, which I’m not sure the LifeChurch.tv pastors have fully thought through. In creating virtual worlds, aren’t we usurping God’s role – and hence committing a heresy? Are avatars created in God’s image or our own? Can they be saved? Can they be damned? Does sin even exist in a virtual world? Is there a Second Afterlife?