The thinnest client

Most of the current discussion about the big changes under way in computing focuses on the software side, particularly on the shift from locally installed software to software supplied as a service over the internet. That’s what all the Web 2.0 fuss is about. Less attention has been paid, so far, to the equally dramatic shift that the new utility-computing model portends for hardware. The ability to deliver ever richer applications from distant, central servers will lead to the increasing centralization of hardware components as well – and that, in turn, will open the door to hardware innovation and entrepreneurship.

Take Newnham Research, a startup in Cambridge, England. It’s developing the thinnest of thin clients – a simple monitor adapter, with a couple of megabytes of video RAM and ports for a mouse and keyboard, that plugs directly into an Ethernet network. All computing is done on a central server, which can be either a traditional server or an inexpensive PC. The only things delivered to the monitor, directly over the network, are compressed pixels. The device is called Nivo – for “network-in, video-out.”
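To make that division of labor concrete, here is a rough sketch, in Python and purely for illustration (Newnham hasn’t published its protocol, so the update format below is invented), of a host shipping only the compressed rows of pixels that changed, while the display end does nothing but decompress and copy bytes:

```python
# Illustrative sketch only: not Newnham's actual Nivo protocol, just the
# general "network-in, video-out" idea, in which the host does all the
# computing and the display end merely decompresses pixel updates and
# copies them into its small local framebuffer.
import struct
import zlib

WIDTH, HEIGHT, BPP = 640, 480, 3   # assumed toy resolution, 24-bit colour
ROW_BYTES = WIDTH * BPP


def encode_update(old: bytes, new: bytes) -> bytes:
    """Host side: pack only the rows that changed, each compressed."""
    chunks = []
    for row in range(HEIGHT):
        start = row * ROW_BYTES
        if old[start:start + ROW_BYTES] != new[start:start + ROW_BYTES]:
            payload = zlib.compress(new[start:start + ROW_BYTES])
            chunks.append(struct.pack(">HI", row, len(payload)) + payload)
    return struct.pack(">I", len(chunks)) + b"".join(chunks)


def apply_update(framebuffer: bytearray, update: bytes) -> None:
    """Display side: unpack, decompress, blit. No application logic at all."""
    count = struct.unpack_from(">I", update, 0)[0]
    offset = 4
    for _ in range(count):
        row, length = struct.unpack_from(">HI", update, offset)
        offset += 6
        row_bytes = zlib.decompress(update[offset:offset + length])
        offset += length
        framebuffer[row * ROW_BYTES:(row + 1) * ROW_BYTES] = row_bytes


if __name__ == "__main__":
    old = bytes(WIDTH * HEIGHT * BPP)          # blank screen
    new = bytearray(old)
    new[:ROW_BYTES] = b"\xff" * ROW_BYTES      # the host "draws" one white row
    update = encode_update(old, bytes(new))
    screen = bytearray(old)
    apply_update(screen, update)
    print(f"one changed row ({ROW_BYTES} bytes) shipped as {len(update)} bytes")
```

In a real deployment the update bytes would travel over the Ethernet link to the adapter; the point is simply that nothing smarter than “decompress and blit” has to live at the screen.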

Newnham, which is being backed by Atlas Ventures and Benchmark, isn’t producing its technology in volume yet. On its site, it suggests a few applications tied to the ease with which you can drive multiple monitors from a single PC, but it’s easy to think of much broader applications as well, particularly in schools, shops and small offices.

One nonprofit company, called Ndiyo, is already putting Newnham’s hardware innovations to work – as a way to deliver computing to people who haven’t previously been able to afford it. Ndiyo began with a simple goal: “Instead of starting with a PC and seeing what we could take out, we began with a monitor and asked what was the minimum we had to add to give a workstation fully capable of typical ‘office’ use.” It’s created a system, using open-source software like Linux, OpenOffice, Firefox, and Evolution, that allows a half-dozen users to share a single PC simultaneously, all doing different things. Need to add more users? Add another PC to create a little Linux cluster, and off you go. (For more details, you can download a pdf presentation from Ndiyo.)
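To see why “add another PC” is the whole scaling story, here is an equally hypothetical sketch (not Ndiyo’s software; the seat count and the broker are assumptions made for illustration) in which each host offers a fixed number of seats and each new terminal is simply handed to the least-loaded machine:

```python
# Illustrative sketch only (not Ndiyo's actual software): each host PC in the
# little cluster serves a handful of thin-client "seats", and adding capacity
# means nothing more than adding another cheap PC to the pool.
from dataclasses import dataclass, field

SEATS_PER_HOST = 6   # assumption: roughly the half-dozen users mentioned above


@dataclass
class Host:
    name: str
    seats: list = field(default_factory=list)


class SeatBroker:
    """Hands each new terminal a session on the least-loaded host with room."""

    def __init__(self):
        self.hosts = []

    def add_host(self, name: str) -> None:
        self.hosts.append(Host(name))

    def assign(self, terminal_id: str) -> str:
        candidates = [h for h in self.hosts if len(h.seats) < SEATS_PER_HOST]
        if not candidates:
            raise RuntimeError("cluster is full: add another PC")
        host = min(candidates, key=lambda h: len(h.seats))
        host.seats.append(terminal_id)
        return host.name


if __name__ == "__main__":
    broker = SeatBroker()
    broker.add_host("pc-1")
    for n in range(6):
        print(f"terminal-{n} -> {broker.assign(f'terminal-{n}')}")
    broker.add_host("pc-2")            # need more users? add another PC
    print(f"terminal-6 -> {broker.assign('terminal-6')}")
```

Adding pc-2 to the pool is all it takes to accept a seventh user; nothing on the terminals themselves changes.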

Bill Gates and Ray Ozzie talk about the disruption of the software market. Hardware’s going to go through a disruption, too – and it’s about time.

8 thoughts on “The thinnest client”

  1. Anthony Cowley

    I’m not as sure about the hardware trends as you. It seems to me that I’m surrounded more and more by devices with ever-growing computational capacity. Why should I squander the ability to carry around a fast, cheap processor in all my devices and put all the burden on the network? I do think that data hosting services will become de rigueur, and while I can easily imagine a world with centralized processing, it just doesn’t seem like the only conclusion to me.

    The other future vision of hardware is the old toaster with an IP address idea, where every device/appliance has a microprocessor, and since they can all communicate on the network you can exploit them all. I’m not ready to discount this alternative trend yet.

  2. Alexis Argyris

    In what way is Nivo different from the graphics terminals connected to mainframes of the 70s and early 80s? I’m not so sure that this full swing back to concepts 20 years old represents real innovation. It looks as if the PC revolution and the “empower the user” attitude were just a 20-year parenthesis that enriched the old terminal with video capability and more speed.

    As Anthony seems to imply, I’d rather look forward to a future of hundreds (thousands?) of nano-CPUs doing auto-orchestrated work all around me than carry around a dumb terminal.

  3. Filip Verhaeghe

    Nick,

    Everyday practical experience doesn’t align with this prediction. Every system operator at every company does this every working day. He uses software like “Remote Desktop Connection” (standard in Windows/Mac/Linux) to transfer the pixels from the server machine’s screen to his own screen over fast LAN connections. This lets him administer servers in secure rooms without leaving his desk. Now ask him if he’d want to work like that for reading his email or writing documents (we’re not even talking about movies). Don’t be surprised if you get thrown out of the room.

    As I wrote before, “network in, video out” is a great idea. No more local software installation. No more local maintenance worries of any kind. But don’t remove that cheap CPU from the local machine. Get rid of the hard disk if you feel like it, but keep the local processor. The mainframe era really wasn’t that great.

    Microsoft Vista is delivering some innovation in this area with its Extensible Application Markup Language (XAML, part of the Windows Presentation Foundation, which will also be available free of charge for Windows XP). You may prefer XUL (Linux) or another similar product; that’s fine too, and I don’t want to go into the differences here. It lets you specify, in XML, what the user should see and how he can interact. XAML is to applications what HTML was to documents. This allows delivery of applications over the Internet, but using local processing for a nice user interface and the smooth interaction we are all used to.

    The next step is to take the care out of the operating system. Be it Linux or Windows, I really don’t want to worry about configuration, security, or the fact that it slows down over time. It would be nice if the vendor took care of that.

    This is just one reason to get all excited about Microsoft embracing the Internet as a platform. Having Microsoft or some of its partners take over all of the maintenance tasks would make sense when you look at the desktop from the Internet’s viewpoint. What I am saying is also not “impossible” or “far away”. Today, some system administrators remotely administer tens of thousands of corporate PCs from behind their desks. All it would take is for this approach to work at Internet scale. I bet consumers would be willing to pay about $20 a year for a service like that. If you can scale and automate, that may be a nice business.

    And of course, Linux vendors or Apple may offer similar services.

    What I am proposing is still utility computing. I have just included a processor inside the screen, and have used some vendor to enable XML application delivery instead of pixel-based delivery (using a transparent layer of software we currently think of as an operating system, but in the future may not think of at all). Meanwhile, under my approach, the server delivering the application can handle Internet scale without extreme computing power. In fact, it isn’t that different from current web servers.

    — Filip

  4. Nick

    Anthony: I agree with you. The fragmentation of end-user computing devices and the consolidation of corporate computing resources seem to me two sides of the same (rich network) coin. Both suggest the end of the PC-centric era, as personal computing splits away from the personal computer. (That doesn’t mean PCs go away, just that their role and functions change.) As for automated, network-enabled toasters, that seems like a fire hazard.

    Alexis: What’s changed between the mainframe era and today? A hell of a lot, including IP. In the mainframe era, we had efficient but non-personal computing. In the client-server era, we had personal but inefficient computing. The utility era offers the possibility of personal and efficient computing. But I don’t disagree with your point about having a mesh of small, smart devices – the models aren’t mutually exclusive.

    Filip: System operators don’t necessarily represent the average business user, and they certainly don’t represent all business users. If you need a local processor, you’ll get a local processor.

  5. Applied Abstractions

    Ultra-thin client computing for the masses

    The ultra-thin client from Newnham Research and Ndiyo is a really good idea, the solution to classroom computing everywhere. With WiFi and a couple of USB ports, this could allow you to set up workstations everywhere. A stable setup with…

  6. Filip Verhaeghe

    Maybe I was misunderstood about those system operators. The point was that system operators (unlike ordinary users) already have, every day, the experience you predict, and they don’t like it (for ordinary desktop use), even on fast local networks. By extension, I don’t think ordinary users will like the experience either.

    Of course, I agree with you that this is the end of the PC-centric era in the way you define it: the mighty PC will be replaced by remote software and remote storage, capable of running on many devices. But I don’t buy into the concept that some central server will do my processing. Local processors are just too cheap to justify that approach. Central (application) servers will tell my local processor what to do, of course.

    Would I want applications as pixels, when I can have them as XML markup that automatically rescales to my current screen size and orientation on my current device? Will new smart people exploit this new markup language in unanticipated ways, like Google did with HTML? I believe in vendors delivering applications over the Internet. But not as pixels.

  7. Anonymous

    The difference that I see between the old shared services/mainframe model and the web services model is in communications. Back then, access to processing and data was provided over slow, low-bandwidth modems—the best thing going at the time. Today, we have a high-bandwidth, reliable, and readily accessible network over which to deliver essentially the same products: processing (i.e. software) and data. So I agree with Alexis that in a real sense, the web services model represents a cycling back to the old mainframe model, whether for good or ill.

    However, this issue of where the software will reside and who will control it is, in my opinion, much less important and interesting than the question of where the data will be located and who will control it. One reason for the shift from shared services to in-house mainframes (followed by mini-computers, then client-server networks) was the desire of organizations to control and protect their own data. Data residing on shared mainframes was vulnerable on a number of fronts. When computing was brought in-house, businesses could take responsibility for their own data security, rather than trust their service provider to keep it safe. In order for Web 2.0 to succeed in the arena of business computing, the issue of data security will need to be addressed. Will businesses be willing to trust a service provider with their data? Will businesses adopt a split computing model in which they purchase software services from third parties but retain data in-house? Or will businesses adopt an in-house web services model in which the organization becomes the software service provider and is also the data repository, while end users work on thin client machines that are mainly just terminals? Each of these models has its advantages and disadvantages. None is without precedent.

    As has always been the case, the real money in computing, both hardware and software, is in the corporate world. An advertising-supported web services model ignores this lucrative market in favor of an audience of individuals who treat hardware as a commodity and software as a freebie.

  8. Sam S

    I think it will be of great use for schools, particularly in developing countries. We can go for it if the solution is built on a completely open technology stack. It looks like the software is open, but the hardware piece is not. I hope Ndiyo will open up the hardware along with its software.
