Google dives into subconscious marketing

Google believes that the effectiveness of the transparent InVideo advertisements that it has begun running on YouTube clips cannot be measured by traditional criteria like click-through rates. Instead, you have to get inside viewers’ brains, literally, and monitor things like “emotional engagement” and “memory retention” and “subconscious brand resonance.” Teaming up with the neuromarketing firm NeuroFocus and the media agency MediaVest, Google conducted a study in which it measured people’s nervous-system responses – through brain-scanning skull sensors, eye tracking, pupil dilation, and galvanic skin response – as they watched YouTube ads.

The study, which involved 40 participants, found that “InVideo ads scored above average on a scale of one to 10 for measures like ‘attention’ (8.5), ‘emotional engagement’ (7.3) and ‘effectiveness’ (6.6),” reports Mediaweek. “According to officials, a 6.6 score is considered strong.” Explains Google’s Leah Spalding: “Standard metrics don’t tell the whole story [about InVideo ads]. Google is an innovative company, and we want to embrace innovative technology … These ads require an approach that is more technologically sensitive.”

In a presentation on the study yesterday, the researchers described how they included a test of “brainwave response to brand logos.” It found that “subconscious brand resonance” strengthened considerably when InVideo overlays were added to traditional banner ads. “Even a single exposure to an InVideo ad boosts subconscious brand awareness from moderate to strong,” they reported.

[Image: googlebrain.jpg]

As the Google study indicates, market research is entering a brave new world. Armed with the tools of neuroscience, marketers are shifting from measuring people’s conscious reactions, which are frustratingly unreliable, to measuring their subconscious responses. Of course, once you understand the determinants of those subconscious responses, you can begin to manipulate them. But marketers would never go that far, would they?

And now for the enterprise …

For Amazon.com’s utility-computing operation, Amazon Web Services, 2009 will be a crucial year, as the company looks to expand beyond its traditional customer base of web developers and other relatively small-scale operators and push its services into the heart of the enterprise market. AWS is hoping to capitalize on the current economic climate, in which cash is suddenly in short supply, to convince larger companies to begin shifting their computing requirements out of their own data centers and into the cloud, transforming IT from a capital expense into a pay-as-you-go operating expense. Amazon CTO Werner Vogels makes the pitch explicitly in a post on his blog today:

These are times where many companies are focusing on the basics of their IT operations and are asking themselves how they can operate more efficiently to make sure that every dollar is spent wisely. This is not the first time that we have gone through this cycle, but this time there are tools available to CIOs and CTOs that help them to manage their IT budgets very differently. By using infrastructure as a service, basic IT costs are moved from a capital expense to a variable cost, building clearer relationships between expenditures and revenue generating activities. CFOs are especially excited about the premise of this shift.

Beyond the marketing push, Amazon is rushing to make its services “enterprise-ready” at a technical level. It has announced today that its computing service, Elastic Compute Cloud, or EC2, is officially out of beta and operating “in full production” (whatever that means). It is also now offering a service-level agreement, or SLA, for EC2, guaranteeing that the service will be available 99.95% of the time. And, as previously announced, EC2 now supports virtual machines running Windows as well as Linux.
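
To put that guarantee in perspective, here is a quick back-of-the-envelope calculation (plain arithmetic only; the SLA’s exact measurement window isn’t spelled out here) showing how little downtime 99.95% availability actually allows:

```python
# Allowable downtime under a 99.95% availability guarantee (simple arithmetic, no AWS specifics assumed).
availability = 0.9995
minutes_per_month = 30 * 24 * 60   # 43,200 minutes in a 30-day month
minutes_per_year = 365 * 24 * 60   # 525,600 minutes in a year

print(minutes_per_month * (1 - availability))       # about 21.6 minutes of downtime per month
print(minutes_per_year * (1 - availability) / 60)   # about 4.4 hours of downtime per year
```

In other words, the guarantee leaves room for only about twenty minutes of outage in a typical month, or a few hours over a year.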

Equally important, Amazon has announced plans to beef up AWS’s management controls during the coming year, an essential step if it’s to entice big companies to begin shifting mainstream applications into Amazon’s cloud. It says it will offer four new or expanded capabilities in this regard (a rough code sketch of the monitoring piece follows the list):

Management Console – The management console will simplify the process of configuring and operating your applications in the AWS cloud. You’ll be able to get a global picture of your cloud computing environment using a point-and-click web interface.

Load Balancing – The load balancing service will allow you to balance incoming requests and traffic across multiple EC2 instances.

Automatic Scaling – The auto-scaling service will allow you to grow and shrink your usage of EC2 capacity on demand based on application requirements.

Cloud Monitoring – The cloud monitoring service will provide real time, multi-dimensional monitoring of host resources across any number of EC2 instances, with the ability to aggregate operational metrics across instances, Availability Zones, and time slots.
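
To make the monitoring idea a bit more concrete, here is a minimal sketch using the present-day boto3 Python SDK (which postdates this announcement; the Auto Scaling group name is purely hypothetical). It pulls average CPU utilization aggregated across all instances in a group, in five-minute buckets, which is the kind of multi-dimensional roll-up the Cloud Monitoring description promises:

```python
import datetime

import boto3  # assumes AWS credentials are configured; the SDK postdates this 2008 announcement

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Average CPU utilization across every instance in a (hypothetical) Auto Scaling group,
# aggregated into five-minute buckets over the past hour.
now = datetime.datetime.utcnow()
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-tier"}],  # hypothetical group name
    StartTime=now - datetime.timedelta(hours=1),
    EndTime=now,
    Period=300,               # one datapoint per five minutes
    Statistics=["Average"],
)
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), "%")
```

Swapping the dimension for a single InstanceId would produce the per-instance view described in the announcement.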

Amazon is not the only company that sees a big opportunity to expand the reach of cloud computing in the coming months. Yesterday, the hosting giant Rackspace announced a big expansion of its cloud-computing portfolio, acquiring two cloud providers: Slicehost, a seller of virtual computing capacity, and Jungle Disk, which offers web-based storage. Next week, Microsoft is expected to announce an expanded set of cloud-computing services that will compete directly with Amazon’s.

While it’s true that the economic downturn will provide greater incentives for companies to consider cloud services as a means of reducing or avoiding capital expenditures, that’s not the whole story. Companies also tend to become more risk-averse when the economy turns bad, and that may put a brake on their willingness to experiment with cloud services. Amazon’s moves today – ditching the beta label, offering service guarantees, promising more precise management controls, speaking to the CFO as well as the CIO – are intended not only to promote the economic advantages of cloud computing but also to make the cloud feel “safer” to big companies. Whether the effort will succeed remains to be seen.

Remembering to forget

Slowly but surely, scientists are getting closer to developing a drug that will allow people to eliminate unpleasant memories. The new issue of Neuron features a report from a group of Chinese scientists who were able to use a chemical – the protein alpha-CaM kinase II – to successfully erase memories from the minds of mice. The memory losses, report the authors, are “not caused by disrupting the retrieval access to the stored information but are, rather, due to the active erasure of the stored memories.” The erasure, moreover, “is highly restricted to the memory being retrieved while leaving other memories intact. Therefore, our study reveals a molecular genetic paradigm through which a given memory, such as new or old fear memory, can be rapidly and specifically erased in a controlled and inducible manner in the brain.”

Technology Review provides further details on the study:

[The researchers] first put the mice in a chamber where the animals heard a tone, then followed up the tone with a mild shock. The resulting associations: the chamber is a very bad place, and the tone foretells miserable things. Then, a month later – enough time to ensure that the mice’s long-term memory had been consolidated – the researchers placed the animals in a totally different chamber, overexpressed the protein, and played the tone. The mice showed no fear of the shock-associated sound. But these same mice, when placed in the original shock chamber, showed a classic fear response. [The chemical] had, in effect, erased one part of the memory (the one associated with the tone recall) while leaving the other intact.

Fiddling with mouse brains is one thing, of course, and fiddling with human brains is another. But the experiment points toward the eventual development of a quick and precise method for manipulating people’s memories:

“The study is quite interesting from a number of points of view,” says Mark Mayford, who studies the molecular basis of memory at the Scripps Research Institute, in La Jolla, CA. He notes that current treatments for memory “extinction” consist of very long-term therapy, in which patients are asked to recall fearful memories in safe situations, with the hope that the connection between the fear and the memory will gradually weaken.

“But people are very interested in devising a way where you could come up with a drug to expedite a way to do that,” he says. That kind of treatment could change a memory by scrambling things up just in the neurons that are active during the specific act of the specific recollection. “That would be a very powerful thing,” Mayford says.

Indeed. One can think of a whole range of applications, from the therapeutic to the cosmetic to the political.

Googley treats for Goose Creek

There are a few things we know about Google data centers:

1. They cost $600 million.

2. They employ 200 people.

3. They open with a down-home ribbon-cutting ceremony featuring politicians, oversized scissors, a local band, balloons, and a tent stocked with “Googley treats.”

The latest such hoedown was held on October 7 at Google’s new data center near Goose Creek in Berkeley County, South Carolina. In addition to the governor and the mayor, the event was attended by a passel of reporters and a lucky group of 50 local citizens who won a lottery for invitations. The Digitel has a video of the proceedings, and Heather of Lowcountry Bloggers offers a report:

I have lived in Berkeley County most of my life and was pleased to attend the ribbon cutting ceremony for Google’s new data center located between Goose Creek and Moncks Corner in Mt. Holly Business Park.

People have asked why Berkeley County? It all comes down to money and resources. South Carolina and Berkeley County officials were willing to negotiate; [local electric utility] Santee Cooper played a big role ensuring enough electricity would be available (at a reasonable rate) …

Attendees of today’s event were treated to bluegrass, food, live demonstrations of Google’s products, speeches, and of course the ribbon cutting.

There were a few curmudgeons in the audience, needless to say. Joshua Curry, of the Charleston City Paper, offered a particularly dyspeptic take on the celebration:

What a letdown … I was stoked to drive up there hoping to see the latest in high tech facilities, ie racks and racks of servers silently blinking and digesting data. I’ll admit it, I geek out on that kind stuff. Because I like to see how things work, no matter how banal it may seem to other eyes. The problem was, they didn’t let anybody inside. No photos, not even a peek. I asked at least five different people who were connected to that kind of access in some way and was politely told ‘no’, followed by a Google smile.

The Google smile is a happy go lucky California kind of vibe that cloaks a complete distance from the rest of the world. It says “I can’t tell you anything about what I do or see and my stock is vesting soon. Enjoy the festivities.” It seems taken directly from that Star Trek episode where everyone gets “absorbed”, “Are you of the body? Peace and tranquility to you.”

Total buzzkill. Clearly, Joshua Curry did not ingest a sufficient number of Googley treats.

But if you scroll down through Curry’s post, you’ll be treated to some sweet data-center porn, including photos of the center’s liquid cooling system and a row of backup generators that, writes Curry, “could probably power the whole county.”

Rich Miller, the Larry Flynt of data-center porn, has some more photos of the new center, which indicate that it has a different design than earlier Google centers. Noting that some of the ground floor appears to consist of a large undivided space open to the elements, Miller suggests that the center, like Google’s other new center, in Lenoir, North Carolina, may have been built to accommodate server-packed shipping containers. If Miller’s right, the Carolina plants would seem to mark a new generation of Google server farms.

This post will self-destruct in five minutes

It was all very hush-hush. On Saturday, September 20, 2008, a carefully selected group of the tech world’s best and brightest assembled in a windowless conference room at NASA’s Ames Research Center in Silicon Valley – barely a mile from the Googleplex as the rocket flies – to discuss preparations for our impending post-human future. This was the founding meeting of Singularity University, an academic institution whose mission, as founder Dr. Peter Diamandis told the elite audience, would be “to assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies (bio, nano, info, etc); and to apply, focus and guide these to the best benefit of humanity and its environment.”

Also speaking that day were Ames Research Center Director Dr. S. Pete Worden, inventor and chief singularitarian Dr. Ray Kurzweil, Google founder and co-president Larry Page, Dr. Aubrey de Grey of the Methuselah Foundation, Dr. Larry Smarr of the California Institute for Telecommunications and Information Technology (his slides, misdated by a day, are here), Director of Cisco Systems Space and Intelligence Initiatives Rick Sanford, Dr. Dharmendra S. Modha of IBM’s Cognitive Computing Group, leading nanotechnologist Dr. Ralph Merkle, and artificial intelligence impresarios Bruce Klein and Susan Fonseca-Klein. Among the few dozen in the audience were Second Life’s Philip Rosedale, Powerset’s Barney Pell, and Wired editor Chris Anderson.

A photograph of the group – it kind of looks like the New Age wing of the military-industrial complex – has found its way into my hands, but for God’s sake don’t tell anyone you saw it here:

[Image: su.jpg]

The day after the meeting, IBM’s Modha wrote a brief post about the event, but his words were quickly erased from his web site – not, however, before they were copied to the MindBroker site. “All in all,” wrote Modha, “a weekend day well spent in company of brilliant and sincere people trying to make a positive impact on the world!”

Modha’s post is one of the few public clues to the existence of Singularity University. (Another person who posted news of Singularity University was, he reports, “immediately contacted by people involved with the SU launch and asked [nicely and as a favor, nothing like cease and desist] to remove the post from the web archive, the reason being that the web sources quoted [not available anymore on the web, but still in Google cache and some blogs] had been posted without authorization and in breach of confidentiality.”) Attendees of the Ames meeting were asked to keep their lips zipped: “The Singularity University founding meeting and the details around the Singularity University are being held confidential until a public announcement is officially made. Please do not discuss or share this information publicly. Thank you in advance for your cooperation.” The last thing you want to do is frighten the humans.

The cost of First Click Free

The web you see when you go through Google’s search engine is no longer the web you see when you don’t go through Google’s search engine.

In a note on my previous post, The Centripetal Web, Seth Finkelstein points to Philipp Lenssen’s discussion of a new Google service, called First Click Free, that the company formally unveiled on Friday. First Click Free allows publishers that restrict access to their sites (to paying or registered customers) to give privileged access to visitors who arrive via a Google search. In essence, if you click on a Google search result you’ll see the entire page of content (your first click is free) and you will only come up against the pay wall or registration screen if you try to look at a second page on the site. As Google explains:

First Click Free is designed to protect your content while allowing you to include it in Google’s search index. To implement First Click Free, you must allow all users who find your page through Google search to see the full text of the document that the user found in Google’s search results and that Google’s crawler found on the web without requiring them to register or subscribe to see that content. The user’s first click to your content is free and does not require logging in. You may, however, block the user with a login or payment or registration request when he tries to click away from that page to another section of your content site …

To include your restricted content in Google’s search index, our crawler needs to be able to access that content on your site. Keep in mind that Googlebot cannot access pages behind registration or login forms. You need to configure your website to serve the full text of each document when the request is identified as coming from Googlebot via the user-agent and IP-address. [emphasis added]
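
To make the mechanism concrete, here is a minimal sketch of how a publisher might wire up the two rules Google describes: serve the full text when the request comes from Googlebot, and grant a free first click to visitors who arrive from a Google results page. It uses Flask, and the user-agent, referrer, and cookie checks are simplified assumptions (Google also recommends verifying the crawler’s IP address, which is omitted here):

```python
from flask import Flask, redirect, request

app = Flask(__name__)

def is_googlebot(req):
    # Simplified: checks the user-agent only; a real implementation would also verify the IP address.
    return "Googlebot" in req.headers.get("User-Agent", "")

def referred_from_google(req):
    # The "first click free" case: the visitor arrived by clicking a Google search result.
    return "google." in req.headers.get("Referer", "")

def is_subscriber(req):
    # Hypothetical subscription check via a session cookie.
    return req.cookies.get("subscriber") == "yes"

@app.route("/articles/<slug>")
def article(slug):
    if is_googlebot(request) or referred_from_google(request) or is_subscriber(request):
        return full_article(slug)   # full text for the crawler, the free first click, or a paying reader
    return redirect("/subscribe")   # everyone else hits the pay wall

def full_article(slug):
    return f"<h1>{slug}</h1><p>Full article text goes here.</p>"  # placeholder content

if __name__ == "__main__":
    app.run()
```

A second click inside the site carries the publisher’s own referrer rather than Google’s, so it falls through to the pay wall, which is exactly the behavior Google sanctions.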

Now this is a helluva good business idea. (Google News has had a similar program in place for a while for newspaper sites, I believe.) It’s good news both for publishers (who get an easy way to provide teaser content to potential customers) and for surfers (who get access to stuff that used to be blocked). But, as Lenssen points out, it marks a fairly profound change in the role that Google’s search engine plays and, more generally, in the organization of the web:

There once was a time when Google search tried to be a neutral bystander, watching the web without getting too actively involved. There once was a time when Google instructed webmasters to serve their Googlebot the same thing served to a site’s human users. Now, Google is officially telling webmasters they can serve one thing to people coming from Google web search, and another thing to people coming from elsewhere … Google’s organic results thus become not any view onto the web, but a special one. You may prefer this view – when using Google you’re being treated as a VIP, after all! – or dislike it. And it might force you to rely on Google even more than before if some publishers start creating one free website for Google users, and another free one for second-class web citizens.

Efforts splicing up the web into vendor specific zones aren’t new, though the technologies and specific approaches involved vary greatly. In the 1990s, “Best Viewed with Netscape” or “Optimized for Internet Explorer” style buttons sprung up, and browser makers were working hard to deliver their users a “special” web with proprietary tags and more. Many of us had strong dislikes for such initiatives because it felt too much like a lock-in: the web seems to fare better when it works on cross-vendor standards, not being optimized for this or that tool or – partly self-interested – corporation.

At the very least, First Click Free provides another boost to the web’s centripetal force, as Google further strengthens the advantage that its dominance of search provides. Google doesn’t like to think of itself as locking in users to its search engine, but if you get a privileged view of the web when you go through Google, isn’t that, as Lenssen suggests, a subtle form of lock-in? Isn’t Google’s web just a little bit better than the traditional unmediated web?

The centripetal web

“A centripetal force is that by which bodies are drawn or impelled, or any way tend, towards a point as to a center.” -Isaac Newton

When I started blogging, back in the spring of 2005, I would visit Technorati, the blog search engine, several times a day, both to monitor mentions of my own blog and to track discussions on subjects I was interested in writing about. But over the last year or so my blog-searching behavior has changed. I started using Google Blog Search to supplement Technorati, and then, without even thinking about it really, I began using Google Blog Search pretty much exclusively. At this point, I can’t even remember the last time I visited the Technorati site. Honestly, I don’t even know if it’s still around. (OK, I just checked: it’s still there.)

Technorati’s technical glitches were part of the reason for the change in my behavior. Even though Technorati offered more precise tools for searching the blogosphere, it was often slow to return results, or it would just fail outright. When it came to handling large amounts of traffic, Technorati just couldn’t compete with Google’s resources. But it wasn’t just a matter of responsiveness and reliability. As a web-services conglomerate, Google made it easy to enter one keyword and then do a series of different searches from its site. By clicking on the links to various search engines that Google conveniently arrays across the top of every results page, I could search the web, then search news stories, then search blogs, then (if I was really ambitious) search scholarly papers. Google offered the path of least resistance, and I happily took it.

I thought of this today as I read, on TechCrunch, a report that people seem to be abandoning Bloglines, the popular online feed reader, and that many of them are coming to use Google Reader instead. The impetus, again, seems to be a mix of frustration with Bloglines’ glitches and the availability of a decent and convenient alternative operated by the giant Google. The first few comments on the TechCrunch post are revealing:

“switching temporary (?) to google reader, bloglines currently sucks too much”

“I got so fed up with bloglines’ quirks that I switched over to Google Reader and haven’t looked back”

“I’ve finally abandoned Bloglines for the Google Reader”

“Farewell, dear Bloglines. I loved you, but I’m going over to the dark side. I don’t love Google Reader, but at least I can get my feeds”

“Bloglines, please stop sucking. It’s been a couple of weeks now. I don’t want to have to move to Google Reader. Sigh.”

“Thanks for the tip about exporting feeds to Google Reader. I made the transition too. Goodbye Bloglines.”

During the 1990s, when the World Wide Web was bright and shiny and new, it exerted a strong centrifugal force on us. It pulled us out of the orbit of big, central media outlets and sent us skittering to the outskirts of the info-universe. Early web directories like Yahoo and early search engines like AltaVista, whatever their shortcomings (perhaps because of their shortcomings), led us to personal web pages and other small, obscure, and often oddball sources of information. The earliest web loggers, too, took pride in ferreting out and publicizing far-flung sites. And, of course, the big media outlets were slow to move to the web, so their gravitational fields remained weak or nonexistent online. For a time, the web had no mainstream; there were just brooks and creeks and rills and the occasional beaver pond.

And that landscape felt not only new but liberating. Those were the days when you could look around and easily convince yourself that the web would always be resistant to centralization, that it had leveled the media playing field for good. But that view was an illusion. Even back then, the counterforce to the web’s centrifugal force – the centripetal force that would draw us back toward big, central information stores – was building. Hyperlinks were creating feedback loops that served to amplify the popularity of popular sites, feedback loops that would become massively more powerful when modern search engines, like Google, began to rank pages on the basis of links and traffic and other measures of popularity. Navigational tools that used to emphasize ephemera began to filter it out. Roads out began to curve back in.
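
The feedback loop is easy to see in miniature. Here is a toy sketch of link-based ranking in the spirit of PageRank (not Google’s actual algorithm, and the link graph is invented): pages that attract links score higher, and links from high-scoring pages count for more, so early popularity compounds with each pass:

```python
# Toy link-based ranking in the spirit of PageRank (illustrative only, not Google's algorithm).
# Pages with more inbound links score higher, and links from high-scoring pages count for more:
# the feedback loop that keeps amplifying already-popular sites.
links = {
    "big_portal":  ["blog_a", "blog_b"],
    "blog_a":      ["big_portal"],
    "blog_b":      ["big_portal"],
    "obscure_gem": ["big_portal"],   # links out, but nobody links back
}

damping = 0.85
scores = {page: 1.0 / len(links) for page in links}

for _ in range(50):  # power iteration until the scores settle
    new_scores = {page: (1 - damping) / len(links) for page in links}
    for page, outlinks in links.items():
        share = damping * scores[page] / max(len(outlinks), 1)
        for target in outlinks:
            new_scores[target] += share
    scores = new_scores

for page, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{page:12s} {score:.3f}")
```

The obscure page ends up at the bottom no matter how good its content is, because the ranking sees only the links, and the links keep flowing toward the already-linked.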

At the same time, and for related reasons, scale began to matter. A lot. Big media outlets moved online, creating vast, enticing pools of branded content. Search engines and content aggregators, like Google, expanded explosively, providing them with the money and expertise to create technical advantages – in speed, reliability, convenience, and so on – that often proved decisive in attracting and holding consumers. And, of course, people began to demonstrate their innate laziness, retreating from the wilds and following the increasingly well-worn paths of least resistance. A Google search may turn up thousands of results, but few of us bother to scroll beyond the top three. When convenience meets curiosity, convenience usually wins.

Wikipedia provides a great example of the formative power of the web’s centripetal force. The popular online encyclopedia is less the “sum” of human knowledge (a ridiculous idea to begin with) than the black hole of human knowledge. At heart a vast exercise in cut-and-paste paraphrasing (it explicitly bans original thinking), Wikipedia first sucks in content from other sites, then it sucks in links, then it sucks in search results, then it sucks in readers. One of the untold stories of Wikipedia is the way it has siphoned traffic from small, specialist sites, even though those sites often have better information about the topics they cover. Wikipedia articles have become the default external link for many creators of web content, not because Wikipedia is the best source but because it’s the best-known source and, generally, it’s “good enough.” Wikipedia is the lazy man’s link, and we’re all lazy men, except for those of us who are lazy women.

Now, it’s true, as well, that Wikipedia provides some centrifugal force, by including links to sources and related works at the foot of each article. To its credit, it’s an imperfect black hole. But compared to the incredible power of its centripetal force, magnified by search engine feedback loops and link-laziness, its centrifugal force is weak and getting weaker. Which is, increasingly, the defining dynamic of the web as a whole. The web’s centrifugal force hasn’t gone away – it’s there in the deliberately catholic linking of a Jason Kottke or a Slashdot, say, or in a list of search results arranged by date rather than by “relevance” – but it’s far less potent than the centripetal force, particularly when those opposing forces play out at the vastness of web scale where even small advantages have enormous effects as they ripple through billions of transactions. Yes, we still journey out to the far reaches of the still-expanding info-universe, but for most of us, most of the time, the World Wide Web has become a small and comfortable place. Indeed, statistics indicate that web traffic is becoming more concentrated at the largest sites, even as the overall number of sites continues to increase, and one recent study found that as people’s use of the web increases, they become “more likely to concentrate most of their online activities on a small set of core, anchoring Websites.”

Chris Anderson’s “long tail” remains an elegant and instructive theory, but it already feels dated, a description of the web as we once imagined it to be rather than as it is. The long tail is still there, of course, but far from wagging the web-dog, it’s taken on the look of a vestigial organ. Chop it off, and most people would hardly notice the difference. On the web as off it, things gravitate toward large objects. The center holds.