All roads lead to Omaha

Is it just a coincidence that Google dumped the “Froogle” name for its shopping service immediately after shelling out a wallet-whomping $3.1 billion in cash for DoubleClick? As MarketWatch’s John Shinal notes, Sergey Brin and Larry Page’s veneration of Warren Buffett no longer extends to copying the Oracle of Omaha’s philosophy of buying companies and other assets on the cheap. Larry and Sergey are turning into Buffett’s prodigal sons. Froogle, they ain’t.

But there is one place where Google continues to display a Buffettian skin-flintiness: in the buildout of the company’s data center network. And that brings us back to Omaha. Mounting evidence suggests that the Omaha-Council Bluffs area, on the Nebraska-Iowa border, will be the site of yet another of Google’s mammoth server farms. According to a report in yesterday’s Des Moines Register:

Two Iowa lawmakers say a large Internet company is weighing whether to locate part of its operation near Council Bluffs – and the House is moving ahead with incentives for that company. “The best information we have: This could be at least a $600 million investment with 100 jobs at $50,000 to $130,000 per job or as much as 10 times that number,” said Rep. Phil Wise.

“$600 million investment” is the new code for “Dude, we’re getting a Google data center!” Google’s Lenoir, North Carolina, center, announced earlier this year, involves a “$600 million investment.” Its Goose Creek, South Carolina, center, announced a few weeks ago, involves a “$600 million investment.” There’s also a “mystery plant” being built on an 800-acre site in Pryor, Oklahoma, that has Googlish fingerprints all over it. When it’s announced, I’m guessing it’ll involve a “$600 million investment.” (And let’s not forget the possible Google data center in Blythewood, South Carolina, with an estimated price tag of “between $200 million and $800 million.”)

Add up all those $600 millions, and you’re talking a whole lot of money. But if you look at the sites, you see a very frugal approach at work. The common threads are cheap land, cheap electricity, cheap labor, cheap water, cheap bandwidth, and rich tax breaks. An article in the Omaha World Herald describes some of the attractions of the region for a Google plant, and the list applies equally well to the other sites:

The Omaha-Council Bluffs metropolitan area would give a company like Google access to relatively inexpensive land and labor compared with other parts of the country … The metro area has a large network of fiber-optic cables, which came with Offutt Air Force Base … The potential Bluffs development is near several electrical grids from the Mid-American Energy plant in southern Council Bluffs. David Sokol, chairman and CEO of MidAmerican, said he doesn’t know what company may be coming, but he did say the Lake Manawa area has plenty of electrical redundancy … The industrial foundation has approached the Bluffs water works about dramatically increasing the amount of water available for the site. Water can be used as part of “chilled air” systems that keep computer servers and other heat-generating equipment cool.

Shinal argues, with good reason, that the purchase of DoubleClick was essentially a defensive move for Google. When Google builds data centers, in contrast, it’s playing offense. That hints at the strategy underlying Google’s frugal-prodigal bipolarism: Spend like drunken sailors when necessary to block competitors, but pinch pennies when building what the company hopes will be the foundation of its dominance in the long run. Prodigal frugality? Frugal prodigality? Let’s just call it Google Proogle.

Intuit’s cloudburst frustrates customers

Here’s further evidence of why, as more computing moves onto the web, a broad, shared computing grid is both necessary and inevitable. The servers that TurboTax-maker Intuit uses to process electronically filed tax returns were swamped yesterday as Americans rushed to get their returns in at the last minute. As the Washington Post reports:

A record number of returns from both individual taxpayers and accountants started causing delays early Tuesday in customers receiving online confirmation their tax returns were submitted successfully, he said. As the midnight filing deadline approached, the problem got worse. During times of peak demand, Intuit was processing 50 to 60 returns per second, [an Intuit spokesman] said …

Usually, it takes only a few minutes after hitting the submit button for TurboTax users to get a message indicating the transaction had gone through. By Tuesday evening, however, it was taking hours, [the spokesman] said. “If you are sitting there and just did your taxes and want to get assurance it’s been filed, it has to go into the queue,” he said. “We are processing as quickly as we can given the unbelievable demand and the last-minute demand. You can’t increase capability quickly enough to solve the problem for every single individual hitting the OK button.”

Intuit is now hoping that customers whose returns were not processed before midnight will not face IRS penalties.

To run its business with private, dedicated servers, Intuit needs to build its data centers with the capacity necessary to handle the extreme spike in traffic – the peak load – that comes on tax-filing day. The vast majority of that installed capacity will go unused most of the time. Multiply that low capacity-utilization rate across thousands of companies, and you get a good sense of the wastefulness inherent in the proprietary model of computing, particularly as companies have to handle rapidly fluctuating web traffic. The only way to do cloud computing efficiently is to share the cloud – to establish a broad, multitenant grid (or a number of them) that balances the loads of many different companies. Otherwise, it’ll be one cloudburst after another, and a whole lot of underutilized capital assets.
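The arithmetic behind that wastefulness is easy to sketch. Here’s a toy Python model – all figures are made up purely for illustration, not actual Intuit numbers – that gives each of twenty companies a flat baseline load plus one peak day of its own, then compares the capacity the dedicated model needs against a shared grid sized for the combined peak:

```python
import random

N_COMPANIES = 20
N_DAYS = 365
BASELINE = 10    # ordinary daily load, in arbitrary "server units"
SPIKE = 200      # load on the company's one peak day (tax day, say)

# Give each company a different peak day; because the peaks don't
# coincide, the shared grid needs far less total capacity.
spike_days = random.sample(range(N_DAYS), N_COMPANIES)
demands = [[SPIKE if day == s else BASELINE for day in range(N_DAYS)]
           for s in spike_days]

# Dedicated model: every company provisions for its own peak.
dedicated_capacity = sum(max(d) for d in demands)

# Shared grid: provision once, for the peak of the combined load.
aggregate = [sum(d[day] for d in demands) for day in range(N_DAYS)]
shared_capacity = max(aggregate)

total_demand = sum(aggregate)
dedicated_util = total_demand / (dedicated_capacity * N_DAYS)
shared_util = total_demand / (shared_capacity * N_DAYS)

print(f"dedicated: {dedicated_capacity} units, {dedicated_util:.0%} utilized")
print(f"shared:    {shared_capacity} units, {shared_util:.0%} utilized")
```

With these invented numbers, the dedicated model builds 4,000 units of capacity that sit roughly 95 percent idle, while the shared grid gets by on 390 units at better than 50 percent utilization – the same work done with about a tenth of the hardware.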

Google buys PowerPoint editor

Filling in a hole in its Google Apps suite, Google has acquired Tonic Systems, which provides a set of tools for the online editing, viewing, and sharing of presentations created with Microsoft PowerPoint. Tonic Systems describes itself as “Java PowerPoint Specialists.” Google says it will incorporate Tonic’s technology into a new presentation service that will be added this summer to Apps.

Tonic’s TonicPoint tools allow you to open a PowerPoint presentation with your web browser, edit it, add new slides to it, extract text and images from it, and save the edited version in various formats. What makes TonicPoint particularly interesting, in the context of Google’s ambitions, is that you don’t have to have a copy of PowerPoint installed on your PC to open and edit a PowerPoint file with the tools. You only need the file. You can, effectively, work in a Microsoft app without buying the Microsoft app.

As with Google Docs and Google Spreadsheets, Google seems to be designing Google Presentations as a hybrid complement/competitor to Microsoft’s Office applications. You first use them as add-on tools for manipulating and sharing Microsoft files online, and then, eventually, you find that you don’t need the underlying applications anymore. Google Apps, in other words, is designed not as an Office Killer but rather as a kind of Office Bodysnatcher. Google doesn’t want to fight the Microsoft apps head-on. It wants to get inside them, and slowly take them over.

Google has wiped the Tonic Systems web site clean, but, for the moment, a demo of the service is still running here. (Update: It’s gone.) Here are some screen shots:

tonicpoint3.jpg

tonicpoint2.jpg

tonicpoint1.jpg

Announcing “The Big Switch”

Connect the dots:

Thomas Edison

The Dalles

Tim Berners-Lee

Crowdsourcing

Virtual Machines

Marc Benioff

Automattic

Computer-Tabulating-Recording Company

Personalized Search

AdWords

Nikola Tesla

Increasing Returns to Scale

Optical Fiber

Ray Ozzie

Fisk Street Station

Department of Defense

CERN Grid

Web 2.0

Danny Hillis

Henry Adams

BitTorrent

ICANN

John von Neumann

Chad Hurley

Spacewar

World’s Columbian Exposition

Amazon Elastic Compute Cloud

Yochai Benkler

William Kemmler

Eric Schmidt

. . . and the future appears.

BIGSWITCHsmall.jpg

It’s coming.

Rules for warbots

The rapid advance of robotic weapons is beginning to stir some intriguing, and disturbing, questions about the future rules of war. The Register and New Scientist point to a presentation by John Canning, an engineer with the U.S. Naval Surface Warfare Center, who argues that we’ve come to an important juncture in the history of warfare in which military robots will increasingly have the ability to autonomously select and destroy targets without human guidance. “With regard to Armed Autonomous Systems,” Canning writes, “the critical issue is the ability for the weapon to discriminate a legal target.”

Up to now, he notes, there has been “a requirement to maintain an operator in the ‘weapons release’-loop to avoid the possibility of accidentally killing someone. [A human] operator is effectively ‘welded’ to each armed unmanned system for this purpose.” But this requirement for human control undermines the performance benefits and cost savings that can now be gained through “the employment of large numbers of armed unmanned systems.”

Canning argues that, when it comes to the use of sophisticated warbots, the military needs to establish clear rules of engagement. In particular, he recommends that machines should only be able to autonomously target other machines: “let’s design our armed unmanned systems to automatically ID, target, and neutralize or destroy the weapons used by our enemies – not the people using the weapons. This gives us the possibility of disarming a threat force without the need for killing them … In those instances where we find it necessary to target the human (i.e. to disable the command structure), the armed unmanned systems can be remotely controllable by human operators who are ‘in-the-weapons-control-loop.'” The ability to switch from an autonomous machine-killing mode to a human-directed people-killing mode should be built into military robots, he says.

The Register’s Lewis Page notes that there would seem to be some practical obstacles to imposing the targeting restrictions: “It isn’t really made clear how [the] rule could really be applied in these cases. Doppler radar is going to have trouble distinguishing between attacking manned jets and incoming missiles, for instance. Even if the two could be swiftly and reliably differentiated, adding a human reaction and decision period in an air-defence scenario may not be a survivable thing to do.” It’s a fair point – how exactly do you program a warbot to, as Canning puts it, “discriminate a legal target”? – but as we look ahead to the prospect of ever more sophisticated autonomous weapons, the questions Canning is raising seem like very good ones to ask.

Googleopoly

Whether or not Google’s planned acquisition of DoubleClick demands close antitrust scrutiny would seem to hinge on how broadly you define the ad market. To Microsoft, the combination will end up controlling “more than 80%” of the market for placing ads on third-party web sites. To AT&T, it raises the specter of one company becoming “the broker of advertising for anything moving on the internet.” Google counters that, when considered as part of the entire ad world, rather than just the online corner of it, Google and DoubleClick are just “small components of a much larger advertising market.”

But it’s when you look beyond advertising, to the broader economic ecosystem that’s coming to define the way traffic and money flow through the consumer internet, that the Google-DoubleClick deal becomes more interesting, and troublesome, from an antitrust perspective. Google is not only the dominant player in the ad-serving market (and would see its dominance expand greatly by adding DoubleClick’s dominant banner-ad business), but is also the dominant player in the web search market, controlling somewhere between 48% and 64% of that business (depending on whose data you believe). It has also, through its recent YouTube acquisition, seized a dominant share of the burgeoning market for the delivery of video online. Combined with Google Video, YouTube controls 55% of that market, according to Compete, while its nearest competitor, MySpace, holds just 15%. Google’s dominance in all these areas, moreover, seems to be increasing, suggesting that all these markets may have winner-takes-all characteristics.

Such broad dominance across advertising, search, and content businesses – all complements of one another – gives Google rich opportunities for cross-subsidization that in turn provide it with enormous flexibility in cutting exclusive deals with other internet content and services businesses. Consider video serving, for instance. Once Google begins embedding ads in popular YouTube videos, it will be able to be very generous in offering other web publishers revenue-sharing deals for running YouTube videos on their sites – deals far more attractive than what other, narrowly focused video-hosting companies – Photobucket, say – could possibly afford. Google can opportunistically give away any of its complements knowing that the other complements will benefit.

Is Google invulnerable? Of course not. It’s still a young company in a young industry, and it has plenty of opportunities to screw up or be taken by surprise. But Google does seem well on its way to creating a playing field that, no matter how you look at it, always tilts in its favor.

MIT announces Human 2.0

“The age of Human 2.0 is here,” proclaims the Massachusetts Institute of Technology in launching a new Media Lab initiative to create an improved human being. The “h2.0” program, which kicks off with a free symposium next month, already has a catchy slogan – New Minds, New Bodies, New Identities – and a cool logo that uses a green see-through head as the dot in h2.0:

h20logo.gif

“Now at the dawn of the 21st century,” says MIT, “a new category of tools and machines is poised to radically change humanity at a velocity well beyond the pace of Darwinian evolution.” The h2.0 program, which the university says is not only “dramatic and crucially important” but also “world shattering” (I would have thought that was a bad thing), “seeks to advance on all fronts to define and focus this scientific realignment.”

We’re definitely overdue for an upgrade – it seems like we’ve been stuck in Version 1.x for a few hundred thousand years, and that was after a beta that went on for freaking ever. Still, I think I’ll probably hold off until 2.01 or 2.02. I don’t want to be on the bleeding edge for this one.

I can’t wait to get that green see-through head, though.