Monthly Archives: December 2008

A prescription for smart pills

In response to the flood of prescription brain stimulants like Ritalin and Adderall on college campuses, a group of academics from Stanford, Harvard, Cambridge, Penn, and other schools say the time has come to allow such drugs to be prescribed to healthy people for “cognitive enhancement.” In a commentary published yesterday in Nature, they argue that such drugs, as well as future therapies like brain chips, should be viewed no differently than communications technologies or good sleep habits:

Human ingenuity has given us means of enhancing our brains through inventions such as written language, printing and the Internet. Most authors of this Commentary are teachers and strive to enhance the minds of their students, both by adding substantive information and by showing them new and better ways to process that information. And we are all aware of the abilities to enhance our brains with adequate exercise, nutrition and sleep. The [cognitive-enhancement] drugs just reviewed, along with newer technologies such as brain stimulation and prosthetic brain chips, should be viewed in the same general category as education, good health habits, and information technology — ways that our uniquely innovative species tries to improve itself.

They acknowledge but reject some of the more common ethical arguments that have been made against the prescription of smart pills:

Cognitive-enhancing drugs require relatively little effort, are invasive and for the time being are not equitably distributed, but none of these provides reasonable grounds for prohibition. Drugs may seem distinctive among enhancements in that they bring about their effects by altering brain function, but in reality so does any intervention that enhances cognition. Recent research has identified beneficial neural changes engendered by exercise, nutrition and sleep, as well as instruction and reading. In short, cognitive-enhancing drugs seem morally equivalent to other, more familiar, enhancements … Given the many cognitive-enhancing tools we accept already, from writing to laptop computers, why draw the line here and say, thus far but no further?

While recommending further study of the effects of cognition-enhancing drugs as well as the laws controlling their use, the authors, led by Henry Greely of Stanford Law School, “call for a presumption that mentally competent adults should be able to engage in cognitive enhancement using drugs.” They go further to suggest, in terms that seem almost Swiftian (Jonathan, not Tom), that the government should actively support the distribution and use of amphetamines and other types of brain-boosting drugs: “If cognitive enhancements are costly, they may become the province of the rich, adding to the educational advantages they already enjoy. One could mitigate this inequity by giving every exam-taker free access to cognitive enhancements, as some schools provide computers during exam week to all students. This would help level the playing field.”

That’s the economic playing field. I worry more, though, about the possibility of leveling the cognitive playing field, as institutionally supported programs of brain enhancement impose on us, intentionally or not, a particular ideal of mental function. In a long list of questions for further research, the authors make a glancing reference to this concern: “Do [these drugs] change ‘cognitive style’, as well as increasing how quickly and accurately we think?” Something tells me that once the idea of artificial brain “enhancement” becomes accepted, through writings like this Nature commentary, that question will end up being pushed aside. Will people worry about the subtleties of “cognitive style” if they sense that the person in the next dorm or office is getting an edge on them by popping smart pills?

So much for the Googley Treats

Last May, Google marked the opening of its new data center in Lenoir, North Carolina, by feting the local residents with a big “Googley Barbecue,” complete with “Googley Treats,” a “Meet-a-Googler” tent, and a “bouncy house in Google colors.”

They’re not bouncing in Lenoir this morning.

Ed Cone points to reports that Google, having hired only 50 of the 210 workers that it told state and local officials it would employ in Lenoir, is halting construction of its second server warehouse on the site and “has informed all construction workers, from engineers to laborers, that there won’t be any more work on the site for a while.” The company yesterday told the state it would not be collecting a $4.7 million Job Development Investment Grant that North Carolina had awarded it in 2006.

The jobs grant represents only a tiny portion of the state and local incentives that Google is receiving for the Lenoir center. “Tax breaks on electricity, property tax waivers and other concessions could still push Google’s incentives package over $250 million in the decades to come,” according to the Triangle Business Journal. Google has told the state that it plans to eventually complete construction of the center and meet its job-creation promises, but that as a result of “volatile economic conditions,” it has no idea when construction will start up again.

Coming on the heels of Google’s announcement last month that it was mothballing its new data plant in Oklahoma, the company’s decision to halt work in Lenoir is a clear sign that Google is “moderating the pace of its data center building boom,” writes Rich Miller of Data Center Knowledge. “Google spent $452 million on its infrastructure in the third quarter of 2008, which was its lowest investment in capital expenditures since the company began its data center construction effort in early 2007. The third quarter total was well below the record $842 million Google spent on its data centers in the first quarter.”
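The scale of that pullback is easy to check from the two figures Data Center Knowledge cites. A quick sketch of the arithmetic (the dollar amounts are from the quoted report; nothing else is assumed):

```python
# Illustrative arithmetic using the capex figures quoted above
# (Data Center Knowledge's numbers, not independent data).
q1_2008_capex = 842_000_000  # record quarterly infrastructure spend
q3_2008_capex = 452_000_000  # lowest since the build-out began in early 2007

drop = q1_2008_capex - q3_2008_capex
pct_drop = drop / q1_2008_capex * 100

print(f"Quarterly decline: ${drop / 1e6:.0f}M ({pct_drop:.0f}%)")
```

In other words, Google's quarterly infrastructure spending fell by nearly half from its peak.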

Clouds, it seems, are not recession-proof.

The trailer park is the computer

Microsoft is about to take trailer park computing, or, as The Register memorably dubbed it, white trash computing, to its logical and necessary conclusion. The company’s next generation of utility data centers will take the form of – you guessed it – trailer parks: sprawling, roofless parking lots in which all the components – server clusters, power units, security systems – will be prefabricated offsite, packed into containers or other types of “modules,” trucked in, and plopped down on the ground as needed. (No word on whether employees at the new centers will be required to wear wifebeaters or carry around 30-packs of Busch Light.)

In an extensive blog post, Microsoft’s top data-center guy, Michael Manos, lays out the details of what the company calls its “Gen 4” centers, which will become the cornerstones of its “hyper-scale cloud infrastructure” for at least the next five years. He writes:

If we were to summarize the promise of our Gen 4 design into a single sentence it would be something like this: “A highly modular, scalable, efficient, just-in-time data center capacity program that can be delivered anywhere in the world very quickly and cheaply, while allowing for continued growth as required.” [You can tell Manos is a real data-center guy because he’s under the impression that sentences don’t require verbs.] From a configuration, construct-ability and time to market perspective, our primary goals and objectives are to modularize the whole data center. Not just the server side … but the mechanical and electrical space as well. This means using the same kind of parts in pre-manufactured modules, the ability to use containers, skids, or rack-based deployments and the ability to tailor the Redundancy and Reliability requirements to the application at a very specific level.

The modularity of the systems will, in other words, allow the company to tailor the sophistication (and cost) of its infrastructure to the varying levels of service quality that users expect from different web apps, allowing reductions in capital costs, Manos says, of “20%-40% or greater depending upon class [of app].” Equally important, from a cost standpoint, the new design will allow the company “to deploy capacity when our demand dictates it” rather than “mak[ing] large upfront investments.” This underscores one of the core economic challenges that has faced every utility throughout history and will face the new computing utilities as well: the need to match capacity to demand on an ongoing basis to ensure that capital is used efficiently.
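The capacity-matching tradeoff can be made concrete with a toy comparison: build the whole facility up front for projected peak demand, or truck in prefabricated modules only as demand materializes. This is a minimal sketch of the idea Manos describes; all of the numbers, function names, and the per-megawatt cost figure are hypothetical, invented for illustration.

```python
import math

def upfront_cost(peak_capacity_mw, cost_per_mw):
    """Build the whole facility on day one, sized for projected peak demand."""
    return peak_capacity_mw * cost_per_mw

def modular_cost(quarterly_demand_mw, module_mw, cost_per_mw):
    """Deploy prefabricated modules just-in-time, only as demand materializes."""
    deployed_modules = 0
    total = 0.0
    for demand in quarterly_demand_mw:
        needed = math.ceil(demand / module_mw)  # modules required this quarter
        if needed > deployed_modules:
            total += (needed - deployed_modules) * module_mw * cost_per_mw
            deployed_modules = needed
    return total

demand = [4, 6, 6, 9, 12]                  # MW drawn each quarter (hypothetical)
print(upfront_cost(20, 10e6))              # sized for a 20 MW peak projection
print(modular_cost(demand, 2, 10e6))       # 2 MW modules, same unit cost
```

In this toy case the modular build spends capital only on realized demand, so if the 20 MW projection never materializes, the difference is pure stranded capacity avoided — which is the point of Manos's “deploy capacity when our demand dictates it.”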

On the green side of things, Manos says he expects the open-air design of the centers to “completely eliminate the use of water [for cooling]. Today’s data centers use massive amounts of water and we see water as the next scarce resource and have decided to take a proactive stance on making water conservation part of our plan.” I may be reading too much into it, but I take this as a dig at Google, which up to now has sited its data centers in places where it has easy access not only to cheap electricity but to megagallons of water. In fact, I think Microsoft’s openness about how it builds its data smelters is meant to draw a contrast with Google’s hyper-secrecy. “By sharing [our plans] with the industry,” writes Manos, “we believe everyone can benefit from our methodology. While this concept and approach may be intimidating (or downright frightening) to some in the industry, disclosure ultimately is better for all of us.” Translation: Microsoft is all about sharing, while the Googlers are stingy and selfish. (That’s a nice PR twist, but I’m guessing Google would argue that the reason it’s more secretive is that it has more valuable stuff to hide.)

Oh yeah: there is the obligatory animated video, and it has a bitchin’ soundtrack:

<a href="http://video.msn.com/video.aspx?vid=b4d189d3-19bd-42b3-85d7-6ca46d97fe40" target="_new" title="Microsoft Generation 4.0 Data Center Vision">Video: Microsoft Generation 4.0 Data Center Vision</a>

Roofless!