Monthly Archives: August 2006

Malware as a service

Web 2.0 is “a permissive society,” writes Chris Nuttall in today’s Financial Times, “where users borrow, append and mix their data freely with one another.” Free love, software style, can spur a ton of entrepreneurial creativity – a lot of cute offspring get bred really fast. But, as Nuttall notes, it’s an awfully good way to spread disease as well: “The linked-up, sharing, live-updating melting pot of web technologies that has been dubbed the second version of the web is proving fertile ground for infiltrators seeking to inject malicious code into the mix.” Malware, like other forms of software, is becoming a service, as viruses, worms and other nasties piggyback on the multitude of data exchanges that happen automatically and invisibly when users browse the web today.

We’ve already seen malware attacks or vulnerabilities crop up in Yahoo’s web mail service, Google’s RSS service, and MySpace’s core “friending” service, as Nuttall documents. And those are the big, sophisticated players. The biggest vulnerabilities lie in the myriad of smaller services popping up all over the place. It’s fairly easy to hack together an Ajax site, but it’s not so easy to hack together a secure Ajax site. As Nuttall writes, “many Web 2.0 startups are too small to be able to dedicate much time to security.” As services get mashed together and as data and code get shared, the consequences of sloppiness can get magnified quickly.
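
To make the Ajax point concrete, here is a minimal sketch of the kind of sloppiness Nuttall is describing (my own illustration, in TypeScript; the widget, endpoint and helper names are hypothetical, not anything from his article). A mashup fetches user-contributed data from another service and drops it straight into the page as markup, so anything an attacker embedded in that data renders with the host site's privileges. The safer version, shown alongside, treats the fetched content strictly as text.

```typescript
// A hypothetical mashup widget that pulls user-submitted comments from
// a third-party service and displays them on the host page.

interface Comment {
  author: string;
  body: string; // user-controlled text fetched from another service
}

async function fetchComments(feedUrl: string): Promise<Comment[]> {
  const res = await fetch(feedUrl);
  return (await res.json()) as Comment[];
}

// Sloppy: interpolating untrusted data into innerHTML renders whatever
// markup an attacker embedded in a comment, including script-bearing
// attributes such as <img onerror="...">.
function renderCommentUnsafe(target: HTMLElement, c: Comment): void {
  target.innerHTML += `<p><b>${c.author}</b>: ${c.body}</p>`;
}

// Safer: build DOM nodes and assign untrusted strings as text content,
// so the browser treats them as data, not markup.
function renderComment(target: HTMLElement, c: Comment): void {
  const p = document.createElement("p");
  const b = document.createElement("b");
  b.textContent = c.author;
  p.appendChild(b);
  p.appendChild(document.createTextNode(`: ${c.body}`));
  target.appendChild(p);
}

async function showWidget(): Promise<void> {
  const comments = await fetchComments("https://example.com/api/comments");
  const target = document.getElementById("comments")!;
  comments.forEach((c) => renderComment(target, c));
}
```

The unsafe version is the one-liner a small startup ships first; the difference between the two is exactly the kind of attention to detail that, as Nuttall notes, under-resourced Web 2.0 shops rarely have time for.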

The problems will likely get worse in the near term, as the bad guys learn how to exploit weaknesses faster than the good guys learn how to avoid or fix them. Eventually, as always, computer security will settle into an unending cat-and-mouse game. Where Web 2.0’s vulnerabilities may have the biggest impact is in impeding the adoption of web-based productivity tools by corporations. It’s easy to criticize IT departments for being a barrier to employees’ experimentation with web-based services, but when malware brings a network down or compromises data, it’s the IT department whose neck is on the line. A corporation would be foolish if it didn’t give system security a higher priority than software experimentation. When it comes to securing Web 2.0 services, the onus has to be on the supplier, not the user.

Open source as metaphor

Ethan Zuckerman provides an extensive report on what seems to have been an extraordinarily illuminating Wikimania panel exploring the differences between the production of open source software and the production of Wikipedia. One thing that becomes clear from the discussion is how dangerous it is to use “open source” as a metaphor in describing other forms of participative production. Although common, the metaphor almost always ends up reducing the complex open-source model to a simplistic caricature.

The discussion also sheds light on a topic that I’ve been covering recently: Yochai Benkler’s contention that we are today seeing the emergence of sustainable large-scale production projects that don’t rely on either the pricing system or management structure. Benkler’s primary example is open source software. But panelist Siobhan O’Mahony’s description of the evolution of open source projects reveals that they have become increasingly interwoven with the pricing system and increasingly dependent on formal management structure:

She argues that the F/OSS [free/open source software] model has now matured, with formalized governance structures, and that it’s very useful to look at the non-profit foundations that have helped these projects deal with firms and a commercial ecosystem. Her interest comes in part from the “myth” of F/OSS – that we’re hackers, we don’t need marketing, we’re a meritocracy – that’s not what really happens, as most serious F/OSS contributors will tell you.

From 1993-2000, many F/OSS projects were self-governing, accepting volunteer contributions with most participants motivated by the cause, ideology and idealism. From 2000 – 2006, the majority of volunteers are sponsored by vendors, well-supported by in-kind donations of hardware, marketing and legal services. Most commercial-grade projects have incorporated as nonprofit foundations with formal governance structures. The foundations hold assets, protect projects from liability, and present project to the outside world, including brokering agreements with commercial firms.

Certainly, the idea of community is important to understanding the origins, structure, and development of the open source model, and many open source contributors are motivated by rewards that can’t be measured in dollars and cents. But it’s hard at this point to make the case that open source exists in some purified space outside the world of pricing and management.

UPDATE: Two excellent retorts to this post, one from Tim Bray, the other from Assaf at Labnotes. They argue, among other things, that in trying to counter an oversimplification about open source, I’ve made my own oversimplification. A point well taken.

MySpace friends Google

Google has ponied up nearly a billion smackeroos to be MySpace’s search and advertising partner, leaving MySpace’s former ad partner, Yahoo, in the lurch. One wonders if Google will refuse to run ads near MySpace’s naughty bits, as is its practice for other AdSense partners, or whether MySpace is a “friend with privileges.”

UPDATE: Eric Schmidt answers my question: “We are not going to cover MySpace with ads.” I think that means that the naughty bits will remain uncovered. Which is a good thing, on many levels.

Just say “delete”

Jack Welch earned the nickname Neutron Jack by taking a neutron-bomb approach to downsizing GE’s organization: leave the facilities in place, but get rid of the employees. Time-Warner’s AOL division appears to be pursuing this same strategy, but with an interesting twist. Instead of targeting its employees, it’s targeting its customers.

First it alienated the Netscape faithful by turning their old home page into an American Idol version of a newspaper. (Civilians bombed in war vs. world’s hottest chili sauce: Vote Now!) Yesterday, it alienated the rest of its customers by making public a mountain of data about the searches they’ve been making. Although the customers’ names were stripped out, those in the know say there’s enough specificity in some of the search sets to tie them back to individuals – or at least to make educated guesses. I guess AOL figures that before it can attract users to the New AOL it has to cleanse itself of the clients of the Old AOL.

Seriously, though, there may be a silver lining in this big ugly cloud of leaked data. It will raise awareness of the fact that all the web searches we make, day after day, are stored neatly away in giant corporate data warehouses, where they can be sifted and analyzed by marketers and, if push comes to shove, government agents. And, further, it will raise awareness of the fact that it doesn’t necessarily have to be like that. There’s no reason why our search logs have to be saved. Andrew Orlowski suggests that we should push for regulations forcing search engines to erase keyword data immediately after a search is processed:

The only solution to the problem of data abuse – and it’s only an inadequate, and very partial answer – is to ensure the data isn’t there to abuse in the first place. If search engines were required to delete their users’ queries as soon as they were made, and to leave no trace, this would greatly diminish the dangers of false inference by law enforcement officials, health companies, banks, HMOs, and anyone else seduced by the lure of a faulty algorithm. Data that doesn’t exist is also less vulnerable to being stolen … If that takes a regulatory agency, to ensure search engines “Leave No Trace”, so be it.

Chimes in Scott Karp: “Clearly, our societal and legal infrastructure is not prepared to deal with the human mirror of the Internet. We need public debates. We need Congressional hearings. And, unfortunately, we probably need legislation.”

AOL’s own Jason Calacanis – svengali of the Netscape makeover, as it happens – suggests a different approach to the same end:

Frankly, I want us to NOT KEEP LOGS of our search data. Yep, you heard that right … we shouldn’t even keep this data. I know that’s crazy, but I learned this week that Wikipedia turned off their log files. They did this for tech reasons, but they now are keeping them off and not looking to solve the problem because of the huge upside of users knowing their searches on wikipedia DON’T EVEN EXIST! I think we should use this as a way to brand AOL Search: We don’t record your searches!

A search engine that guaranteed it wouldn’t save its search logs would certainly set itself apart from the pack – in a way that could well be appealing to a large number of people who value their privacy more than they value a flood of highly targeted advertisements and marketing pitches. I seriously doubt that AOL or any of the other big boys would take this route today, given the shape of their current businesses and revenue streams, but for a newcomer with a clean slate it could be a powerful pitch.
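
As a sketch of what “leave no trace” would mean in practice (my illustration, not anything AOL or Calacanis describes; the function and field names are hypothetical), the idea is simply that the access log records that a search happened, and perhaps how it performed, while the query string itself is never written anywhere:

```typescript
// A toy in-memory search index and a handler that deliberately never
// persists the query text -- only anonymous operational metrics.

const index: Map<string, string[]> = new Map([
  ["privacy", ["Article on search privacy", "Orlowski column"]],
  ["wikipedia", ["Wikimania coverage"]],
]);

interface AccessLogEntry {
  timestamp: string;
  resultCount: number;
  latencyMs: number;
  // Note what is absent: no query, no user ID, no IP address.
}

const accessLog: AccessLogEntry[] = [];

function handleSearch(query: string): string[] {
  const started = Date.now();
  const results = index.get(query.toLowerCase()) ?? [];

  // Log only what is needed to operate the service; the query itself
  // is discarded as soon as this function returns.
  accessLog.push({
    timestamp: new Date().toISOString(),
    resultCount: results.length,
    latencyMs: Date.now() - started,
  });

  return results;
}

console.log(handleSearch("privacy")); // two results
console.log(accessLog);               // no trace of the word "privacy"
```

The tradeoff, of course, is that the operator gives up the very query data that feeds ad targeting and relevance tuning, which is why the incumbents, with their current businesses and revenue streams, are unlikely to go this route.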

UPDATE: A comment to this post notes that you can search Google anonymously through Scroogle.org, which “scrapes” Google results and then serves them up through a Scroogle.org page. It provides a kind of buffer between your computer and the search engine’s computer. Scroogle.org also lets you search Yahoo anonymously.

Breaking news: Kids still bored

A new Los Angeles Times poll of teens and young adults finds that kids are about as unimpressed with Internet media as they are with traditional media. The poll’s results suggest, according to the paper,

that the revolution in entertainment, media and technology for which many in Hollywood are already developing strategies has not yet taken hold. For example, respondents say that traditional sources such as television advertising and radio airplay still tend to drive their decisions about movies and music more than online networking sites. Those interested in keeping up with current events report a surprising interest in conventional news sources, especially local TV news. And although many see their computers as a perfectly good place to watch a TV show or a movie, there does not appear to be widespread desire to take in, say, “Spider-Man 3” on their video iPods.

The findings about news-gathering habits seem particularly interesting. Only 10% of teens and 11% of people in their early 20s said that they consider online news sites, including blogs, their “best source” of news. That seems to further underscore something I wrote about a couple of days ago: that, contrary to popular assumptions, there doesn’t seem to be any substantial generational shift from mainstream media to the Internet when it comes to the news. Even for the young, the web looks like a supplement, not a replacement.

Quality control

The transformation of Wikipedia continued over the weekend as the top Wikipedians gathered for their annual Wikimania confab. Whereas last year’s Wikimania was held quietly at a German youth hostel, this year’s took place in the elite groves of Harvard University, was sponsored by Coca-Cola and other big corporations, and was attended by some of the tech blogosphere’s most-bearded eminences.

Wikipedia is Mainstream.

Wikipedia cofounder and prime mover Jimmy Wales set the new agenda in his keynote address, when he announced that the online encyclopedia’s focus would shift from quantity to quality. In a subsequent interview with the New York Times, Wales said that “there’s a sense in the English[-speaking] community that we’re going from the era of growth to the era of quality … That could mean quality control — making sure the information is accurate — and it could mean a clearer presentation, or more information.” That marks a striking change, at least in rhetoric, for Wales, who up until now has frequently cited the sheer quantity of Wikipedia’s entries as a measure of its success and who has resisted admitting that Wikipedia has become more controlled, and less open, as it has tried to improve its quality. In referring to a “new era,” Wales shows that he’s finally becoming comfortable in talking about the reality of Wikipedia’s evolution toward a more traditional editorial structure, where experts and editors take on a much more central role in shaping the product.

It remains to be seen exactly where Wikipedia will end up setting the balance between control and openness, and what level of quality it will achieve as a result. Clearly, there are tensions in the organization between those who desire to impose greater control and those who cherish the principle of openness on which the encyclopedia was founded. In what may be a sign of the depth of those tensions, the Times reports that “one member of the foundation’s board, Florence Nibart-Devouard, stormed out of a news conference because she had not been told about the announcement being made.”

But at least this year’s Wikimania helped set one thing straight: Quality is ultimately a function not of openness but of control. Quality doesn’t emerge naturally from below; it’s imposed willfully from above. It never hurts to be reminded that some truths remain truths even as fashions and technologies change.

Yo ho, yo ho

I’ve been having a little discussion about piracy with Tim O’Reilly over at his blog. O’Reilly argues that piracy (of digital goods) is on balance a good thing. I don’t disagree with that. As is the case with black and grey markets in general, digital piracy probably does provide some economic benefits, and it may even, as O’Reilly suggests, increase net revenues in a given market by increasing awareness of underpromoted goods (though I’d like to see hard evidence on that). But I think it’s critical to point out that piracy is only good so long as it’s bad. In other words, there have to be real costs (social, legal, technological) to taking the pirate route in order to prevent it from displacing too many sales. As soon as piracy becomes legitimate, the balance tips way over to the bad side. By lending legitimacy to piracy, O’Reilly may, unintentionally, end up promoting the ill effects of piracy and hence weakening his own argument.

A fascinating question is: What is the optimal cost for piracy? What’s the point at which, if you increase the penalties, you diminish beneficial awareness-building, and, if you decrease the penalties, you promote too much cannibalization of legitimate sales? I imagine economists have looked at this question for digital piracy. Anybody know of any studies?
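
For what it’s worth, here is how I’d frame the tradeoff as a toy model (my own sketch, not drawn from any study): let $c$ be the effective cost of pirating a good, let $S(c)$ be legitimate sales, which rise as piracy gets costlier, and let $A(c)$ be the awareness-driven spillover sales, which fall as piracy gets costlier. Total revenue is

$$
R(c) = S(c) + A(c), \qquad S'(c) > 0, \quad A'(c) < 0,
$$

and it is maximized at the cost $c^*$ where

$$
\frac{dR}{dc}\Big|_{c = c^*} = S'(c^*) + A'(c^*) = 0,
$$

that is, where the marginal sale recovered from would-be pirates exactly offsets the marginal sale lost to diminished word-of-mouth. Below $c^*$, piracy is too cheap and cannibalization dominates; above it, enforcement is too harsh and the awareness benefit O’Reilly points to is forgone. Any real study would have to estimate the shapes of $S$ and $A$ empirically, which is exactly the evidence I’d like to see.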