A typology of network strategies

This week’s pissing match – I mean, spirited conversation – between Tim O’Reilly and me regarding the influence of the network effect on online businesses may have at times seemed like a full-of-sound-and-fury-signifying-nothing academic to-and-fro. (Next topic: How many avatars can dance on the head of a pin?) But, beyond the semantics, I think the discussion has substantial practical importance. O’Reilly is absolutely right to push entrepreneurs, managers, and investors to think clearly about the underlying forces that are shaping the structure of online industries and influencing the revenue and profit potential of the companies competing in those industries. But clarity demands definitional precision: the more precise we are in distinguishing among the forces at work in online markets, the more valuable the analysis of those forces becomes. And my problem with O’Reilly’s argument is that I think he tries to cram a lot of very different forces into the category “network effect,” thereby sowing as much confusion as clarity.

Ten years ago, we saw a lot of fast-and-loose discussions of the network effect. Expectations of powerful network effects in online markets were used to justify outrageous valuations of dotcoms and other Internet companies. Disaster ensued, as the expectations were almost always faulty. Either they exaggerated the power of the network effect or they mistook other forces for the network effect. So defining the network effect and other related and unrelated market-shaping forces clearly does matter – for the people running online businesses and the people investing in them.

With that in mind, I’ve taken a crack at creating a typology of what I’ll call “network strategies.” By that, I mean the various ways a company may seek to benefit from the expanded use of a network, in particular on the Internet. The network may be its own network of users or buyers, or it may be a broader network, of which its users form a subset, or even the entire Net. I don’t pretend that this list is either definitive or comprehensive. I offer it as a starting point for discussion.

Network effect. The network effect is a consumption-side phenomenon. It exists when the value of a product or service to an individual user increases as the overall number of users increases. (That’s a very general definition; there has been much debate about the rate of increase in value as the network of users grows, which, while interesting, is peripheral to my purpose.) The Internet as a whole displays the network effect, as do many sites and services supplied through the Net, both generic (email) and proprietary (Twitter, Facebook, Skype, Salesforce.com). The effect has also heavily shaped the software business in general, since the ability to share the files created by a program is often very important to the program’s usefulness.

When you look at a product or service subject to the network effect, you can typically divide the value it provides to consumers into two categories: the intrinsic value of the product or service (when consumed in isolation) and the network-effect value (the benefit derived from the other users of the product or service). The photo site Flickr, for example, has an intrinsic value (a person can store, categorize, and touch up his own photos) and a network-effect value (related to searching, tagging, and using other people’s photos stored at Flickr). Sometimes there is only a network-effect value (a fax machine or an email account in isolation is pretty much useless), but usually there’s both an intrinsic value and a network-effect value. Because of its value to individual users, the network effect typically increases the switching costs a user would incur in moving to a competing or substitute product or service, hence creating a “lock-in” effect of some degree. Standards can dampen or eliminate the network-effect switching costs, and the resulting lock-in effect, by transforming a proprietary network into part of a larger, open network. The once-strong network effect that locked customers into the Microsoft Windows PC operating system, for instance, has diminished as file standards and other interoperability protocols have spread, though the Windows network effect has by no means been eliminated.
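To make that decomposition concrete, here’s a minimal sketch (mine, not anything from the services named above) that models a single user’s value as an intrinsic component plus a network-effect component. The linear per-peer benefit is just one assumed growth rate; the debate about the right rate, noted earlier, is precisely about this choice.

```python
# Hypothetical illustration: total value to one user of a networked service,
# split into an intrinsic component and a network-effect component.
# The per-peer benefit and its linear shape are assumptions for illustration,
# not a claim about any particular service.

def user_value(n_users: int, intrinsic: float, per_peer_benefit: float) -> float:
    """Value to a single user when n_users (including that user) are on the service."""
    network_effect = per_peer_benefit * max(n_users - 1, 0)  # benefit from everyone else
    return intrinsic + network_effect

# A fax machine in isolation: essentially no intrinsic value, all network effect.
print(user_value(n_users=1, intrinsic=0.0, per_peer_benefit=0.5))      # 0.0
print(user_value(n_users=1000, intrinsic=0.0, per_peer_benefit=0.5))   # 499.5

# A photo site: real intrinsic value plus a network-effect bonus.
print(user_value(n_users=1, intrinsic=10.0, per_peer_benefit=0.01))       # 10.0
print(user_value(n_users=100000, intrinsic=10.0, per_peer_benefit=0.01))  # ~1010.0
```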

Data mines. Many of the strategies that O’Reilly lumps under “network effect” are actually instances of data mining, which I’ll define (fairly narrowly) as “the automated collection and analysis of information stored in the network as a byproduct of people’s use of that network.” The network in question can be the network of a company’s customers or it can be the wider Internet. Google’s PageRank algorithm, which gauges the value of a web page through an analysis of the links to that page that exist throughout the Net, is an example of data mining. Most ad-distribution systems also rely on data mining (of people’s clickstreams, for instance). Obviously, as the use of a network increases, particularly a network like the Net that acts as a very sensitive recorder of behavior, the value of the data stored in that network grows as well, but the nature of that value is very different from the nature of the value provided by the network effect.
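Since PageRank is the canonical example of this kind of mining, a toy version may make the idea concrete. This is a bare-bones power-iteration sketch over a made-up four-page web, offered only as an illustration of ranking pages by link structure, not as Google’s actual implementation.

```python
# Toy PageRank via power iteration on a tiny, invented link graph.
# An illustration of the idea, not Google's production algorithm.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank everywhere
                share = damping * rank[page] / n
                for p in pages:
                    new_rank[p] += share
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical four-page web: the links are the "byproduct" data being mined.
toy_web = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
print(pagerank(toy_web))  # page "c", with the most inbound links, ranks highest
```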

Digital sharecropping, or “user-generated content.” A sharecropping strategy involves harvesting the creative work of Internet users (or a subset of users) and incorporating it into a product or service. In essence, users become a pool of free or discounted labor for a company or other producer. The line between data mining and sharecropping can be blurry, since it could be argued that, say, the formulation of links is a form of creative work and hence the PageRank system is a form of sharecropping. For this typology, though, I’m distinguishing between the deliberate products of users’ work (sharecropping) and the byproducts of users’ activities (data mining). Sharecropping can be seen in Amazon’s harvesting of users’ product reviews, YouTube’s harvesting of users’ videos, Wikipedia’s harvesting of users’ writings and edits, Digg’s harvesting of users’ votes about the value of news stories, and so forth. It should be noted that while sharecropping involves an element of economic exploitation (with a company substituting unpaid labor for paid labor), the users themselves may not experience any sense of exploitation, since they may receive nonmonetary rewards for their work (YouTube users get a free medium for broadcasting their work, Wikipedia volunteers enjoy the satisfaction of contributing to what they see as a noble cause, etc.). Here again, the benefits of the strategy tend to increase as the use of the network increases.

Complements. A complements strategy becomes possible when the use of one product or service increases as the use of another product or service increases. As more people store their photographs online, for instance, the use of online photo-editing services will also increase. As more blogs are published, the use of blog search engines and feed readers will tend to increase as well. The iPhone app store encourages purchases of the iPhone (and purchases of the iPhone increase purchases at the app store). While Google pursues many strategies (in fact, all of the ones I’ll list here), its uber-strategy, I’ve argued, is a complements strategy. Google makes more money as all forms of Internet use increase.

Two-sided markets. eBay makes money by operating a two-sided market, serving both buyers and sellers and earning revenue through transaction fees charged to sellers. Amazon, in addition to its central business of running a traditional one-sided retail store (buying goods from producers and selling them to customers), runs a two-sided market, charging other companies to use its site to sell their goods to customers. Google’s ad auction is a two-sided market, serving both advertisers and web publishers. There are also subtler manifestations of two-sided markets online. A blog network like the Huffington Post, for instance, has some characteristics of a two-sided market, as it profits by connecting, on the one hand, independent bloggers and, on the other, readers. Google News and even Mint also have attributes of two-sided markets. (Note that the network effect applies on both sides of two-sided markets, but it seems to me useful to give this strategy its own category since it’s unique and well-defined.)

Economies of scale, economies of scope, and experience. These three strategies are also tied to usage. The more customers or users a company has, the bigger its opportunity to reap the benefits of scale, scope, and experience. Because these strategies are so well established (and because I’m getting tired), I won’t bother to go into them. But I will point out that, because they strengthen with increases in usage, they are sometimes mistaken for the network effect in online businesses.

None of these strategies is new. All of them are available offline as well as online. But because of the scale of the Net, they often take new or stronger forms when harnessed online. Although the success of the strategies will vary depending on the particular market in which they’re applied, and on the way they’re combined to form a broader strategy, it may be possible to make some generalizations about their relative power in producing competitive advantage or increasing revenues or widening profit margins in online businesses. I’ll leave those generalizations for others to propose. In any case, it’s important to realize that they are all different strategies with different requirements and different consequences. Whether an entrepreneur or a manager (or an investor) is running a Web 2.0 business (whatever that is) or a cloud computing business (whatever that is), or an old-fashioned dotcom (whatever that is), the more clearly he or she distinguishes among the strategies and their effects, the higher the odds that he or she will achieve success – or at least avoid a costly failure.

Microsoft to offer Office-in-the-cloud

Microsoft’s long-awaited push into cloud computing continues today, as the company announces plans to offer fully functional, if “lightweight,” versions of its popular Office applications as web services that will run in people’s browsers. The move signals Microsoft’s intention to defend its massive Office business against incursions from Google Apps, Zoho, and other online competitors. Versions of the apps will be available in both ad-supported and subscription models, according to Microsoft’s Chris Capossela:

We will deliver Office Web applications to consumers through Office Live, which is a consumer service with both ad-funded and subscription offerings. For business customers, we will offer Office Web applications as a hosted subscription service and through existing volume licensing agreements. We will show a private technology preview of the Office Web applications later this year.

Meanwhile, Google isn’t standing still. Yesterday, it announced that it would allow its Gmail users to embed features of its Google Docs word-processing application and its Google Calendar application into their email windows. This will aid the company in promoting its suite of Office substitutes to its large base of Gmail users.

The battle is joined. The outcome will be judged not only by whether Microsoft can maintain its dominance of the Office market but also by whether it can preserve the outsized revenues and profits it has long enjoyed in that market.

Further musings on the network effect and the cloud

Tim O’Reilly, in a comment on my earlier post about how he overstates the importance of the network effect, writes: “… you failed to address my main point, namely that cloud computing is likely to be a low-margin business, with the high margin applications found elsewhere.”

Let me try to correct that oversight.

O’Reilly is here using “cloud computing” in the narrow sense of offering for-fee access to utility data centers for basic computing “infrastructure” encompassing compute cycles, data storage, and network bandwidth (a la Amazon Web Services or Windows Azure). I would definitely agree that this will be – and should be! – a low-margin business, as is generally the case with utility industries. (O’Reilly seems to dislike big low-margin businesses. Personally, I’m fond of them.) Success in a capital-intensive utility industry often hinges on maximizing usage in order to utilize your capital equipment as productively as possible; seeking high margins, by keeping prices high, can actually be self-defeating in that it can constrain usage and lead to suboptimal capacity utilization. I would also argue that the infrastructure side of cloud computing will likely come to be dominated by a relatively small number of firms that will tend to be quite large, which is quite different from the fragmented hosting business that O’Reilly believes will be the model for the infrastructure cloud.

Where I have a real problem with O’Reilly’s argument, though, is when he goes on to suggest that the low-margin characteristics of the cloud infrastructure business can be best explained by the lack of a strong network effect in that business. That’s balderdash. If you were to list the determinants of the profitability of the cloud infrastructure business, the lack of a strong network effect would be way down the list. O’Reilly appears to be suffering from a touch of tunnel vision here. The network effect is his hammer, and he’s looking for nails.

As to O’Reilly’s belief that at least some cloud applications will be relatively high-margin businesses (in comparison with running the infrastructure), I have no beef with that view. I would even be happy to agree that in some cases the network effect will be a source of those high margins. But I would strongly disagree with O’Reilly’s idea that a strong network effect will be the only source, or the primary source, of high margins in the web app business. (“Ultimately, on the network, applications win if they get better the more people use them,” he declared. “As I pointed out back in 2005, Google, Amazon, ebay, craigslist, wikipedia, and all other Web 2.0 superstar applications have this in common.”*) There will be plenty of other potential paths to high margins: creating a good, useful, distinctive software tool, for instance, or building a strong brand, or achieving some form of lock-in (as horrible as that may sound).

Digression:

I note that today O’Reilly is expanding his definition of “network effect” far beyond his original definition of “applications that get better the more people use them.” He now dismisses that earlier definition as “simplistic,” even though it’s the generally accepted one. (As Liebowitz and Margolis explain, the network effect “has been defined as a change in the benefit, or surplus, that an agent derives from a good when the number of other agents consuming the same kind of good changes. As fax machines increase in popularity, for example, your fax machine becomes increasingly valuable since you will have greater use for it.”) If I were O’Reilly, I would also expand the definition of the term. After all, the more broadly you define “network effect,” the more phenomena you can cram under its rubric.

But, since O’Reilly continues to reject my contention that Google’s success cannot be explained by the network effect, let me defer to a higher authority: Hal Varian. Professor Varian is not only one of the smartest explicators of the network effect and its implications but also, now, a top strategist with Google. The following is an excerpt from a Q&A with Varian from earlier this year:

Q: How can we explain the fairly entrenched position of Google, even though the differences in search algorithms are now only recognizable at the margins? Is there some hidden network effect that makes it better for all of us to use the same search engine?

A: The traditional forces that support market entrenchment, such as network effects, scale economies, and switching costs, don’t really apply to Google. To explain Google’s success, you have to go back to a much older economics concept: learning by doing. Google has been doing Web search for nearly 10 years, so it’s not surprising that we do it better than our competitors. And we’re working very hard to keep it that way!

Yes, Google is adept at mining valuable information from the Net, and the value of that information tends to go up as more people use the Net. Yes, Google runs auctions that become more valuable as more traders join. Yes, web activity in general is a complement to Google’s core profit-making business. But that doesn’t change the fact that there’s little or no network effect in the use of Google’s search engine. The benefit I derive from Google’s search engine does not increase as more people use it. Period.

End of digression.

I think O’Reilly did a nice job of identifying the different layers of the cloud computing business – infrastructure, development platform, applications – and I think he’s right that they’ll have different economic and competitive characteristics. One thing we don’t know yet, though, is whether those layers will in the long run exist as separate industry sectors or whether they’ll collapse into a single supply model. In other words, will the infrastructure suppliers also come to dominate the supply of apps? Google and Microsoft are obviously trying to play across all three layers, while Amazon so far seems content to focus on the infrastructure business and Salesforce is expanding from the apps layer to the development platform layer. The degree to which the layers remain, or don’t remain, discrete business sectors will play a huge role in determining the ultimate shape, economics, and degree of consolidation in cloud computing.

Let me end on a speculative note: There’s one layer in the cloud that O’Reilly failed to mention, and that layer is actually on top of the application layer. It’s what I’ll call the device layer – encompassing all the various appliances people will use to tap the cloud – and it may ultimately come to be the most interesting layer. A hundred years ago, when Tesla, Westinghouse, Insull, and others were building the cloud of that time – the electric grid – companies viewed the effort in terms of the inputs to their business: in particular, the power they needed to run the machines that produced the goods they sold. But the real revolutionary aspect of the electric grid was not the way it changed business inputs – though that was indeed dramatic – but the way it changed business outputs. After the grid was built, we saw an avalanche of new products outfitted with electric cords, many of which were inconceivable before the grid’s arrival. The real fortunes were made by those companies that thought most creatively about the devices that consumers would plug into the grid. Today, we’re already seeing hints of the device layer – of the cloud as output rather than input. Look at the way, for instance, that the little old iPod has shaped the digital music cloud.

Today, we tend to look at the cloud through the eyes of the geek. In the long run, the most successful companies will likely be those that look at the cloud through the eyes of the consumer.

*UPDATE: I just realized that O’Reilly tempered this statement in his comment on my earlier post, writing, “I agree that I probably am overstating the case when I say that this is the only source of business advantage. Of course it isn’t.” Clarification accepted.

UPDATE: Meanwhile, Tim Bray warns against jumping to conclusions about the ultimate shape of the cloud based on what we’ve seen to date. Everything could change in an Internet minute: “Amazon Web Services smells like Altavista to me; a huge step in a good direction. But there are some very good Big Ideas waiting out there to launch, probably incubating right now in a garage or grad school.” The spreadsheet is to the PC as the _______ is to the cloud. Fill in the blank, and win a big prize.

Microsoft launches Windows Azure, its “cloud OS”

Having spent billions constructing a data center network over the last couple of years, Microsoft this morning launched, in limited “preview” form, Windows Azure, its platform for cloud computing. The announcement was made by Microsoft’s top software executive, Ray Ozzie, in a speech at the company’s Professional Developers Conference in Los Angeles.

Microsoft will use the Azure platform to run its own web applications and will also open the platform to outside developers for building and running their own apps. Azure will compete with other cloud platforms, such as Amazon Web Services, Google App Engine, and Salesforce.com’s force.com, and, given Microsoft’s enormous scale and influence in the software industry, its launch marks a milestone in the history of utility computing. The cloud is now firmly in the mainstream. Or, as Microsoft puts it: “The truth is evident: Cloud computing is here.”

The company describes Azure in this way:

Windows Azure is a cloud services operating system that serves as the development, service hosting and service management environment for the Azure Services Platform. Windows Azure provides developers with on-demand compute and storage to host, scale, and manage Web applications on the Internet through Microsoft data centers.

To build these applications and services, developers can use their existing Microsoft Visual Studio 2008 expertise. In addition, Windows Azure supports popular standards and protocols including SOAP, REST, and XML. Windows Azure is an open platform that will support both Microsoft and non-Microsoft languages and environments … Windows Azure welcomes third party tools and languages such as Eclipse, Ruby, PHP, and Python.

During its preview stage, Windows Azure will be available for free to developers. Once the platform launches commercially – and, according to Ozzie, Microsoft will be “intentionally conservative” in rolling out the full platform – pricing will be based on a user’s actual consumption of CPU time (per hour), bandwidth (per gigabyte), storage (per gigabyte) and transactions. The actual fee structure has not been released, though Ozzie says it will be “competitive with the marketplace” and will vary based on different available service levels.
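Since the fee structure hasn’t been released, here, purely as an illustration of how metering along those four dimensions might translate into a bill, is a small sketch with invented unit prices. Every number in it is a placeholder, not an Azure rate.

```python
# Illustration of consumption-based billing across the metered dimensions Ozzie
# describes: CPU time, bandwidth, storage, and transactions.
# All unit prices below are invented placeholders, not Microsoft's actual rates.

HYPOTHETICAL_RATES = {
    "cpu_hour": 0.10,          # $ per hour of compute
    "bandwidth_gb": 0.15,      # $ per GB transferred
    "storage_gb_month": 0.20,  # $ per GB stored per month
    "transactions_10k": 0.01,  # $ per 10,000 storage transactions
}

def monthly_bill(cpu_hours, bandwidth_gb, storage_gb, transactions):
    return (
        cpu_hours * HYPOTHETICAL_RATES["cpu_hour"]
        + bandwidth_gb * HYPOTHETICAL_RATES["bandwidth_gb"]
        + storage_gb * HYPOTHETICAL_RATES["storage_gb_month"]
        + (transactions / 10_000) * HYPOTHETICAL_RATES["transactions_10k"]
    )

# A small app: two instances running all month, modest traffic and storage.
print(f"${monthly_bill(cpu_hours=2 * 730, bandwidth_gb=50, storage_gb=20, transactions=1_000_000):.2f}")
```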

More information can be found at Microsoft’s Azure site and in this technical white paper. Azure’s terms of service can be found here.

One question: Isn’t “azure” typically used to describe a cloudless sky?

What Tim O’Reilly gets wrong about the cloud

Technology publisher and Web 2.0 impresario Tim O’Reilly wrote a thought-provoking post today about the dynamics of the nascent cloud computing business. He makes some important and valid points, but his analysis is also flawed, and the flaws of his argument are as revealing as its strengths.

O’Reilly begins by taking issue with Hugh MacLeod’s contention that, thanks to “power laws,” “a single company may possibly emerge to dominate The Cloud, the way Google came to dominate Search, the way Microsoft came to dominate Software … We’re potentially talking about a multi-trillion dollar company. Possibly the largest company to have ever existed.”

O’Reilly argues that MacLeod mistakes the nature of power laws on the Net: “The problem with this analysis is that it doesn’t take into account what causes power laws in online activity. Understanding the dynamics of increasing returns on the web is the essence of what I called Web 2.0. Ultimately, on the network, applications win if they get better the more people use them. As I pointed out back in 2005, Google, Amazon, ebay, craigslist, wikipedia, and all other Web 2.0 superstar applications have this in common.” O’Reilly goes on to argue that because many elements of cloud computing appear to lack this network effect – they don’t get better the more people use them – they won’t naturally evolve toward a monopoly or oligopoly. Here, he’s talking more about the infrastructure, or raw computing, services offered by, say, Amazon Web Services and less about particular web apps.

Let’s stop here, and take a look at the big kahuna on the Net, Google, which O’Reilly lists as the first example of a business that has grown to dominance thanks to the network effect. Is the network effect really the main engine fueling Google’s dominance of the search market? I would argue that it certainly is not. And in fact, if you look back at that 2005 O’Reilly article, What Is Web 2.0?, you’ll find that O’Reilly makes a very different point about Google’s success. Here’s what he says, in a section of the article titled “Harnessing Collective Intelligence”:

Google’s breakthrough in search, which quickly made it the undisputed search market leader, was PageRank, a method of using the link structure of the web rather than just the characteristics of documents to provide better search results.

This has nothing to do with the network effect as O’Reilly defines it. What Google did was to successfully mine the “intelligence” that lies throughout the public web (not just within its own particular network or user group). The intelligence embedded in a link is equally valuable to Google whether the person who wrote the link is a Google user or not. In his new post, in other words, O’Reilly is confusing “harnessing collective intelligence” with “getting better the more people use them.” They are not the same thing. The fact that my neighbor uses Google’s search engine, rather than Yahoo’s or Microsoft’s, does not increase the value of Google’s search engine to me, at least not in the way that my neighbor’s use of the telephone network or of Facebook would increase the value of those services to me. The network effect underpins and explains the value of the telephone network and Facebook; it does not underpin or explain the value of Google. (Indeed, if everyone other than myself stopped using Google’s search engine tomorrow, that would not decrease Google’s value to me as a user.)

So why has Google’s search engine been able to steadily accumulate more and more market share at the expense of competitors? There are surely many reasons. Let me list several possible ones, all of which are likely more important than the network effect:

1. Google delivers (or in the past has delivered) superior search results as judged by users, thanks to superior algorithms, superior spidering techniques, or other technical advantages.

2. Google delivers (or in the past has delivered) results more quickly than its competitors (an important criterion for users), thanks to superior data processing systems.

3. Google has succeeded in establishing a strong brand advantage, in effect making its name synonymous with web search.

4. Google has, through partnerships, through the distribution of the Google toolbar, and through other means, made its search engine the default search engine in many contexts (and we know that users rarely change default settings).

5. Google has steadily expanded into new web properties and services that, directly or indirectly, funnel users to its search engine.

Now it’s true that, if you want to define market liquidity as a type of network effect, Google enjoys a strong network effect on the advertising side of its business (which is where it makes its money), but it would be a mistake to say that the advertising-side network effect has anything to do with Google’s dominance of web search among users.

The Google example, far from providing support to O’Reilly’s argument that the network effect is the main way to achieve dominance on the modern web – that it is the secret to the success of “all Web 2.0 superstar applications” – actually undercuts that argument. And there are other examples we might point to as well. Apple’s iTunes online store and software system has achieved dominance in digital music distribution not through the network effect (the company only recently got around to introducing a music recommendation engine that derives value from aggregating data on users’ choices) but rather through superior product and software design, superb marketing and branding, smart partnerships, and proprietary file standards that tend to lock in users. There are plenty of “social” online music services built on the network effect; none of them has dented Apple’s dominance. (I would also take issue with O’Reilly’s suggestion that Wikipedia’s success derives mainly from the network effect; Wikipedia doesn’t become any more valuable to me if my neighbor starts using it. Wikipedia’s success is probably better explained in terms of scale and scope advantages, and perhaps even its nonprofit status, than in terms of the network effect.)

“Ultimately, on the network, applications win if they get better the more people use them.” That’s a huge overstatement. Applications, or other kinds of online services, win for many reasons on the network. To be sure, one possible reason is the network effect. I’ve already mentioned Facebook’s success as an example. But there are plenty of smart network-effect services, including ones that O’Reilly singled out back in 2005, like Flickr and del.icio.us, that have not achieved widespread success. They definitely “get better the more people use them,” but they haven’t “won.” And there are plenty of other popular online applications – Turbotax Online, Apple’s MobileMe, MapQuest, Yahoo Mail, Basecamp, Google Reader, Mint, Zoho, etc. – that have achieved success not because of the network effect but because they’re useful, well-designed tools. (The original success of Salesforce.com, the most famous business web app, had nothing to do with the network effect, though Salesforce is now wisely trying to tap into the network effect, through, for instance, its force.com development platform, to extend its success.)

So what does this mean for the eventual shape of the cloud computing business? One thing it means is that, even on the pure infrastructure end of the industry, power-law distributions (wherein a small number of companies end up capturing most of the business) may well emerge for reasons having little or nothing to do with the network effect. Indeed, this new industry seems particularly well suited to a concentration of market power. Here are some of the reasons why:

1. Capital intensity. Building a large utility computing system requires lots of capital, which itself presents a big barrier to entry.

2. Scale advantages. As O’Reilly himself notes, big players reap important scale economies in equipment, labor, real estate, electricity, and other inputs.

3. Diversity factor. One of the big advantages that accrue to utilities is their ability to make demand flatter and more predictable (by serving a diverse group of customers with varying demand patterns), which in turn allows them to use their capital more efficiently. As your customer base expands, so does your diversity factor and hence your efficiency advantage and your ability to undercut your less-efficient competitors’ prices. (A short sketch after this list illustrates the effect.)

4. Expertise advantages. Brilliant computer scientists and engineers are scarce.

5. Brand and marketing advantages. They still matter – a lot – and they probably matter most of all when it comes to the purchasing decisions of large, conservative companies.

6. Proprietary systems that create some form of lock-in. Don’t assume that “open” systems are attractive to mainstream buyers simply because of their openness. In fact, proprietary systems often better fulfill buyer requirements, particularly in the early stages of a market’s development. As IT analyst James Governor writes in a comment on MacLeod’s post, “customers always vote with their feet, and they tend to vote for something somewhat proprietary – see Salesforce APEX and iPhone apps for example. Experience always comes before open. Even supposed open standards dorks these days are rushing headlong into the walled garden of gorgeousness we like to call Apple Computers.”
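To illustrate the diversity-factor point from item 3, here’s a small simulation (with an invented demand model, so treat the numbers as illustrative only) showing how pooling many customers with uncorrelated, spiky demand flattens the aggregate load and shrinks the peak-to-average ratio that capacity must be sized for.

```python
# Illustration of the "diversity factor" (item 3 above): aggregating many
# customers with uncorrelated, peaky demand yields a flatter combined load,
# so less capacity sits idle. The demand model is invented for this sketch.

import random

random.seed(42)
HOURS = 24 * 7  # one simulated week

def customer_demand():
    """One customer: a small base load plus occasional random spikes."""
    base = random.uniform(1.0, 3.0)
    return [base + (random.uniform(5, 15) if random.random() < 0.05 else 0.0)
            for _ in range(HOURS)]

def peak_to_average(load):
    return max(load) / (sum(load) / len(load))

single = customer_demand()
fleet = [customer_demand() for _ in range(500)]
aggregate = [sum(c[h] for c in fleet) for h in range(HOURS)]

print(f"one customer, peak/average:  {peak_to_average(single):.2f}")
print(f"500 customers, peak/average: {peak_to_average(aggregate):.2f}")
# The aggregate ratio sits much closer to 1, meaning capacity sized for the
# peak is used far more efficiently -- the utility's edge grows with scale.
```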

The network effect is indeed an important force shaping business online, and O’Reilly is right to remind us of that fact. (I should also mention that O’Reilly’s post includes other points that I’ve not discussed here.) But he’s wrong to suggest that the network effect is the only or the most powerful means of achieving superior market share or profitability online or that it will be the defining formative factor for cloud computing. Hugh MacLeod is probably right that we will in time see a striking concentration of market power in the cloud computing industry, and the network effect probably won’t have all that much to do with it.

O’Reilly seems determined to believe that his definition of Web 2.0 explains everything about the online business world today, arguing that “the cloud platform, like the software platform before it, has new rules for competitive advantage [and] chief among those advantages are those that we’ve identified as ‘Web 2.0,’ the design of systems that harness network effects to get better the more people use them.” That’s a half-truth parading as a truth. While the cloud may explain Web 2.0, Web 2.0 doesn’t explain the cloud.

UPDATE: Tom Slee chimes in: “if we are to really get to grips with industry concentration in new Internet-driven industries we need to acknowledge both sides of the story – that different industries are pushed by different forces (different parts of cloud computing will see different levels of concentration), and that there are multiple sources of increasing returns (it’s not all Web 2.0 network effects).” I agree with that, though I would add that one of the big question marks about the ultimate structure of the cloud computing industry – maybe the biggest – is whether the “different parts” of the cloud (infrastructure, development platform, apps) will remain separate or whether they will collapse into a single supply model. Will, in other words, the companies that run the data centers also end up supplying the apps? Already, Google is pursuing a model that spans all three layers, raising the question: Will the scale advantages in running the infrastructure also lead to advantages in supplying the apps? I hope not, for the sake of maintaining a robust web apps sector, but my guess is that they will.

The ultimate social network

The British design collective rAndom International, together with artist-programmer Chris O’Shea, has created an exhibition, called Audience, that provides, literally as well as figuratively, a mirror image of our hyperconnected selves. The installation consists of 64 robotic mirrors, each of which “moves its head in a particular way to give it different characteristics of human behaviour. Some chat amongst themselves, some shy away and others confidently move to grab your attention.”

But what’s most interesting is the way the mirror-bots behave as a social network: “When members of the audience occupy the space, the mirrors inquisitively follow someone that they find interesting. Having chosen their subject, they all synchronise and turn their heads towards them. Suddenly that person can see their reflection in all of the mirrors. They will watch this person until they become disinterested, then either seek out another subject or return to their private chatter. The collective behaviour of the objects is beyond the control of the viewer, as it is left entirely to their discretion to let go of their subject.”

I’m agape.

The exhibition debuted last month at the Royal Opera House in London, where the videos were shot. I discovered it via a post by Kottke, who found it via a post by Sippey. And so, link by link, we turn our collective gaze toward the mirrors that return the favor. It seems like something Escher might have dreamt up.

The Economist tours the cloud

The new issue of The Economist features a good primer on cloud computing, written by Ludwig Siegele, which looks at trends in data centers, software, networked devices, and IT economics and speculates about the broader implications for businesses and nations. A free pdf of the entire report is also available.

Siegele notes that the hype surrounding the term “cloud computing” may have peaked already – Google searches for the phrase have fallen after a big spike in July – but that “even if the term is already passé, the cloud itself is here to stay and to grow. It follows naturally from the combination of ever cheaper and more powerful processors with ever faster and more ubiquitous networks. As a result, data centres are becoming factories for computing services on an industrial scale; software is increasingly being delivered as an online service; and wireless networks connect more and more devices to such offerings.” The “precipitation from the cloud,” he concludes (milking the passé metaphor one last time), “will be huge.”