
The cost of First Click Free

October 20, 2008

The web you see when you go through Google's search engine is no longer the web you see when you don't go through Google's search engine.

In a note on my previous post, The Centripetal Web, Seth Finkelstein points to Philipp Lenssen's discussion of a new Google service, called First Click Free, that the company formally unveiled on Friday. First Click Free allows publishers that restrict access to their sites (to paying or registered customers) to give privileged access to visitors who arrive via a Google search. In essence, if you click on a Google search result you'll see the entire page of content (your first click is free) and you will only come up against the pay wall or registration screen if you try to look at a second page on the site. As Google explains:

First Click Free is designed to protect your content while allowing you to include it in Google's search index. To implement First Click Free, you must allow all users who find your page through Google search to see the full text of the document that the user found in Google's search results and that Google's crawler found on the web without requiring them to register or subscribe to see that content. The user's first click to your content is free and does not require logging in. You may, however, block the user with a login or payment or registration request when he tries to click away from that page to another section of your content site ...

To include your restricted content in Google's search index, our crawler needs to be able to access that content on your site. Keep in mind that Googlebot cannot access pages behind registration or login forms. You need to configure your website to serve the full text of each document when the request is identified as coming from Googlebot via the user-agent and IP-address. [emphasis added]
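In other words, publishers are being asked to branch on who is asking for the page. A minimal sketch of that serving logic might look like this (the function names and page strings are mine, not Google's, and a real site would verify the crawler's IP address as well, since the User-Agent header alone is trivially spoofed):

```python
# Rough sketch of the serving logic Google describes above.
# Names and strings are illustrative, not from Google's documentation.

FULL_TEXT = "full article text"
PAYWALL = "please register or subscribe to continue"

def looks_like_googlebot(user_agent):
    # User-agent check only; Googlebot identifies itself in this header.
    # Pair this with an IP check in practice.
    return "Googlebot" in user_agent

def serve_page(user_agent, arrived_from_google_search):
    if looks_like_googlebot(user_agent):
        return FULL_TEXT   # the crawler gets full text for indexing
    if arrived_from_google_search:
        return FULL_TEXT   # a human's first click is free
    return PAYWALL         # everyone else hits the wall

print(serve_page("Mozilla/5.0 (compatible; Googlebot/2.1)", False))
# prints "full article text"
```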

Now this is a helluva good business idea. (Google News has had a similar program in place for a while for newspaper sites, I believe.) It's good news both for publishers (who get an easy way to provide teaser content to potential customers) and for surfers (who get access to stuff that used to be blocked). But, as Lenssen points out, it marks a fairly profound change in the role that Google's search engine plays and, more generally, in the organization of the web:

There once was a time when Google search tried to be a neutral bystander, watching the web without getting too actively involved. There once was a time when Google instructed webmasters to serve their Googlebot the same thing served to a site’s human users. Now, Google is officially telling webmasters they can serve one thing to people coming from Google web search, and another thing to people coming from elsewhere ... Google’s organic results thus become not any view onto the web, but a special one. You may prefer this view – when using Google you’re being treated as a VIP, after all! – or dislike it. And it might force you to rely on Google even more than before if some publishers start creating one free website for Google users, and another free one for second-class web citizens.

Efforts splicing up the web into vendor specific zones aren’t new, though the technologies and specific approaches involved vary greatly. In the 1990s, “Best Viewed with Netscape” or “Optimized for Internet Explorer" style buttons sprung up, and browser makers were working hard to deliver their users a “special” web with proprietary tags and more. Many of us had strong dislikes for such initiatives because it felt too much like a lock-in: the web seems to fare better when it works on cross-vendor standards, not being optimized for this or that tool or – partly self-interested – corporation.

At the very least, First Click Free provides another boost to the web's centripetal force, as Google further strengthens the advantage that its dominance of search provides. Google doesn't like to think of itself as locking in users to its search engine, but if you get a privileged view of the web when you go through Google, isn't that, as Lenssen suggests, a subtle form of lock-in? Isn't Google's web just a little bit better than the traditional unmediated web?

Comments

A bit strange: when you're ready to make your second click, just copy the text of the link, paste it into Google, and go from there as your first click. Never register if you don't want to.

Good for Google but not much in it for publishers.

Posted by: Jim Mason at October 20, 2008 12:37 PM

A better approach would be to use metadata to mark content that is paid content, which all search bots could then serve up in an appropriate way -- a link to the site with a note in the search results that the site is paid content.

That's the more open approach, as well as the more honest.

Now, everybody is involved in selling the search user down the river: Google for showing the page and hooking the person in; the commercial sites for catering to one search engine's unique request.

Posted by: Shelley at October 20, 2008 05:49 PM

Hey Nick, this is not a new policy--I specifically mentioned that first-click-free (FCF) was totally fine to use for Google's web search at a large search conference back in June, for example.

I think FCF is a pretty well-balanced way that publishers can surface content that would normally require a subscription or payment, without the risks of cloaking. Cloaking (serving different content to users than to search engines) gives a poor user experience, and Google has removed magazines and newspaper content for cloaking in the past. In contrast, I think FCF is good for users, because they can get access to high-quality content that they would normally have to register to get. FCF is intended as a way for publishers to surface that content without cloaking.

So FCF represents Google's policy on how to surface subscription content without the risks of cloaking. I don't think Philipp asked other search engines what their policies are on premium content (e.g. is cloaking okay in their opinion? is first-click-free okay?). There's nothing exclusive about FCF--if Yahoo or Microsoft wanted to go with a similar policy, they certainly could. I hope that they do, because it would allow more users to access more content from the deep/invisible web that doesn't get surfaced right now.

Posted by: Matt Cutts at October 20, 2008 05:50 PM

Seems as though this policy may also be fairly insecure.

Are there any methods in place to prevent a user from just setting their user agent to Googlebot and browsing a locked-down site authentication free?

Posted by: Chris Dary at October 20, 2008 06:58 PM

Nick, I'm more critical of Google than most, but I have to agree with Matt here. FCF is essential for supporting "freemium" models of content distribution, which I much prefer to the prospect of all content on the web being either ad-supported or completely firewalled.


Digital libraries like the ACM and HighBeam take a similar approach, providing you full-text search to help you find what you want, but then charging you (either through subscription or a la carte) to download the full article text.


As Matt says, there's nothing unique to Google here. At least as long as Google doesn't seek exclusive rights for its bots.

Posted by: Daniel Tunkelang at October 20, 2008 07:43 PM

Chris Dary, the short answer is that people can't pretend to be Googlebot because they can't come from Google's IP address range. Google crawls from a small set of IP addresses and we provide a specific way that you can tell a user from Googlebot. Given an IP address, you do a reverse DNS lookup to find a hostname on the googlebot.com domain, then do a forward lookup on that hostname to verify the IP address. You can read more about how to verify that a bot is really from Google here: http://googlewebmastercentral.blogspot.com/2006/09/how-to-verify-googlebot.html
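That two-step check (reverse lookup, then confirming forward lookup) can be sketched in a few lines of Python. This is an illustration of the procedure Matt describes, not Google-provided code; `is_real_googlebot` is a made-up name:

```python
import socket

def is_real_googlebot(ip):
    """Reverse-resolve the IP to a hostname, require it to end in
    .googlebot.com, then forward-resolve that hostname and confirm it
    maps back to the same IP. A spoofed User-Agent passes neither step."""
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr(ip)  # reverse DNS
        if not hostname.endswith(".googlebot.com"):
            return False
        return socket.gethostbyname(hostname) == ip            # forward DNS
    except OSError:  # no PTR record, lookup failure, invalid address, etc.
        return False
```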

Posted by: Matt Cutts at October 20, 2008 09:36 PM

Matt,

Gotcha - I'm glad it's not as obvious as setting the user agent. Let's hope that developers are taking these recommendations to heart.

Posted by: Chris Dary at October 20, 2008 10:05 PM

FCF is essential for supporting "freemium" models of content distribution, which I much prefer to the prospect of all content on the web being either ad-supported or completely firewalled.

Daniel, I agree with you (and Matt). As I wrote, the "first click free" practice can benefit both publishers and readers. Indeed, when I've come up against, for instance, the Wall Street Journal's paywall, I've often gone to Google News, looked up the article, and clicked through to see the full text. Great! Me like! But that also illustrates my other point (and one of Lenssen's points): that this provides Google with an advantage that springs directly from its dominance of the search world. Yes, publishers can open their protected sites to other search engines as well, but they have far less motivation to do so than they have with Google given Google's unmatched power to deliver traffic. So by doing good, Google also creates a new barrier to competition and makes the Net a little more centripetal and a little less centrifugal.

And, frankly, I'm less than convinced by Matt's/Google's argument that this doesn't fall within the spirit of the definition of "cloaking" as previously defined by Google. Google's quite good at dancing around its old, idealistic proclamations when it suits its present purposes.

Nick

Posted by: Nick Carr at October 20, 2008 10:30 PM

I thought drawing similarities between FCF and the browser battles of yesteryear was interesting. I think it's obvious that FCF has real potential for good... as previously stated, but I'm wondering if it will pan out as expected. I mean, Netscape probably thought they were doing everyone a favor by introducing the BLINK tag, and I doubt I'll ever fully recover from it.

It might be one of those things where you just have to wait and see what happens; but for me it raises lots of questions. Will it be used as intended? How will it be abused? By users, publishers, and even Google? Even assuming that it can't/won't be exploited, it still strives to divide up the web into Googles and Googles-not... and then, of course, when everyone else jumps on the bandwagon, it'll boil right back down to a battle of the titans.

Who loses the battle? The developers. It's already troublesome enough sometimes to develop sites with any sort of interesting features when you have backward compatibility mandates and such- not to mention accessibility and whatever's next in line... but then you'll have to modify the business logic code with piles of if-blocks for each of the search engines you want to deliver freemium content to. And of course, Yahoo!, MSN, and whatever other search engines decide to play along won't want to do things "the Google way" as they need to one-up them... all I see in my future is more BLINK tags to emulate in CSS/Javascript (figuratively speaking of course) to make the business people happy.

Posted by: Beril the Dwarf at October 21, 2008 05:16 AM

Hi Nick,

I really can't see what the problem is with all this content becoming available to us...

I don't see how this is a "lock-in" or rather, even if it is a "lock-in", how it is BAD for me as a user, or for the participating publishers...

http://the-anti-google-baloney.blogspot.com/2008/10/happy-prisoner.html

PS: The Big Switch is probably one of the best books I have read... great stuff.

Posted by: alex at October 21, 2008 12:19 PM

Gotcha - I'm glad it's not as obvious as setting the user agent.

No, but it's equally easy: just fake the referrer string in the browser, and set it to something like "...google.com/search?q=foobar". Of course this only solves the problems on your end and the link will still break for others when you send it to friends, blog about it, post it in a forum etc.
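In code, that trick amounts to nothing more than setting one header. A minimal Python sketch (the article URL is hypothetical, and the request is only constructed, never sent):

```python
from urllib.request import Request

# Build a request that claims to come from a Google results page,
# per the point above that the Referer header is client-controlled.
req = Request(
    "http://example.com/premium-article",
    headers={"Referer": "http://www.google.com/search?q=foobar"},
)
print(req.get_header("Referer"))  # the server sees whatever we claim
```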

Posted by: Philipp at October 21, 2008 07:14 PM

Rough Type is:

Written and published by
Nicholas Carr
