Alan Turing, cloud computing and IT’s future

The business computing site Internet.com asked me to write an essay speculating on what the corporate IT landscape may look like ten years from now. The result, “IT in 2018: from Turing’s machine to the computing cloud,” is available now as a free pdf download – though registration is required.

Here’s how the essay begins:

In 1936, as the clouds of war gathered once again over Europe, a 24-year-old Cambridge University mathematician named Alan Turing invented the modern digital computer. At least, he invented the idea of the modern digital computer, which, as it turned out, was far more important than constructing any particular physical manifestation of that computer.

Turing’s theoretical apparatus, which he called a “universal computing machine,” was a simple one. In essence, it had the ability to read or write symbols – a one or a zero, say – on an endless roll of paper. It could only take one action at a time, reading or writing a single symbol, but it could remember what it had done, and over an infinite span of time it could take an infinite number of actions.

What Turing had created was, in the words of the historian George Dyson, “a single machine that can exactly duplicate the behavior of any other computing machine.” Any calculation, no matter how complex, can be reduced to a series of discrete, simple steps – an algorithm, or a code – and carried out by Turing’s machine. What that means, quoting Dyson again, is that “in principle all digital computers are equivalent; any machine that can count, take notes, and follow instructions can compute any computable function.” What it also means is this: “Software (coding) can always be substituted for hardware (switching).”

The only real constraints on a universal computing machine are the size of its memory and the speed with which it can carry out its calculations and transmit the results. With enough memory and enough speed, Turing’s work implies, a single computer could be programmed, with software code, to do all the work that is today done by all the other physical computers in the world.

And that is why the modern corporate data center, with all its complex and expensive stacks of machinery, is on the path to obsolescence …
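To make Turing’s idea a little more tangible, here is a minimal sketch of such a machine in a few lines of Python. A tape, a head, and a table of rules are the entire computer; the rule table is the “software.” (The toy program below is purely illustrative – it isn’t drawn from Turing’s paper or the essay.)

    from collections import defaultdict

    def run(rules, tape, state="start", max_steps=10_000):
        # Simulate a one-tape machine: read a cell, write a cell, move left or
        # right, and switch state, until the rules say "halt" (or we give up).
        cells = defaultdict(lambda: "_", enumerate(tape))  # "_" marks a blank cell
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            write, move, state = rules[(state, cells[head])]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip("_")

    # A toy "program": invert a string of bits, then halt at the first blank.
    invert = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run(invert, "100110"))  # prints "011001"

Swap in a different rule table and the same run function computes something else entirely – which is Dyson’s point about software substituting for hardware.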

Mr. Turing also pops up in my column in The Guardian today.

11 thoughts on “Alan Turing, cloud computing and IT’s future”

  1. Nick Carr

    I think you’re confusing the Turing machine with the Turing test. This is about virtualization, not AI.

    As for 1988, I believe I was playing Scarab of Ra on my Mac Plus.

    Nick

  2. MarcFarley

    On one hand, I don’t know why any company would want the headaches of running a major operation that did not contribute to the bottom line. On the other hand, I see a lot of difficulties in making the cloud work as advertised. Scaling up to such enormous boundaries is not a “simple matter of programming”.

    I see two major options: clouds and data centers in a box. The big issue is the cost of managing data resources. It doesn’t matter all that much if the equipment is located in the cloud or in a room/closet as long as you can get what you need at a competitive cost. The management cloud will extend into the business compute room. People will still want their own physical data plant to get the service level they want. The cloud WILL become congested in various ways – that’s what large scale sharing gets you. Some will choose options that respond better than the status quo.

    By 2018 there will also be organizations that generate their own power to ensure they don’t have to depend on a crumbling infrastructure to get it. It will be inexpensive enough for them to do it.

  3. Botchagalupe

    Oh good, you scared me for a minute. I was hoping I didn’t have to waste a weekend trying to find my old LISP and Prolog books in the garage when I saw your reference to AI on page 7 of “IT in 2018: From Turing’s Machine to the Computing Cloud”:

    “Expertise in parallel processing, virtualization, artificial intelligence, energy management and cooling, encryption, high-speed networking, and related fields will be coveted and rewarded.”

    Anyway, in 1988 I was working on one of those old IBM machines trying to do IT automation with LISP and Prolog, the latest and greatest “hype” technology.

    johnmwillis.com

  4. Nick Carr

    Marc, Well put. I largely agree with you, at least for the medium term. Longer term, the economics will favor ditching the local plant, or at least most of it.

    John, If you look at trends in search and related fields, as well as at the types of folks that Google et al. are aggressively hiring, I think it’s fair to say that developing deep expertise in AI would not be a bad career choice.

    Nick

  5. Michael_ONeil

    Nick,

    Thank you for the paper – it’s a very interesting view of the future! I think, though, that you might want to invest in a wide-screen crystal ball. Cloud computing is important, but over the next decade there will be many other important developments…

    If you’re interested, you can check out my take here:

    http://www.itincanada.ca/extern/We-Centric2/presentation.html

    User name: itic

    Password: future

  6. Paul Wallis

    Nick,

    During 2003, the late Jim Gray made an analysis of computing economics:

    “’On Demand’ computing is only economical for very cpu-intensive (100,000 instructions per byte or a cpu-day-per gigabyte of network traffic) applications. Pre-provisioned computing is likely to be more economical for most applications – especially data-intensive ones.”

    And

    “If telecom prices drop faster than Moore’s law, the analysis fails. If telecom prices drop slower than Moore’s law, the analysis becomes stronger.”

    Since then, telecom prices have fallen and bandwidth has increased, but more slowly than processing power, leaving the economics worse than in 2003.
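    For what it’s worth, Gray’s two ways of stating the break-even point line up if you assume a circa-2003 CPU retiring on the order of a billion instructions per second – a rough back-of-the-envelope check, not anything from his paper:

    # Do "100,000 instructions per byte" and "a CPU-day per gigabyte"
    # describe the same threshold? Assumption: ~1e9 instructions per second.
    instructions_per_second = 1e9
    seconds_per_day = 86_400
    bytes_per_gigabyte = 1e9

    cpu_day_in_instructions = instructions_per_second * seconds_per_day  # ~8.6e13
    print(cpu_day_in_instructions / bytes_per_gigabyte)  # ~86,400 instructions per byte, same order as 100,000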

    By 2012, the proposed Blue Gene/Q will operate at about 10,000 TFLOPS, outstripping Moore’s law by a factor of about 10.

    I’ve tried to put The Cloud in historical context and discussed some of its forerunners here. My take is that:

    “I’m sure that advances will appear over the coming years to bring us closer, but at the moment there are too many issues and costs with network traffic and data movements to allow it to happen for all but select processor intensive applications, such as image rendering and finite modelling.”

    I don’t know if we’ll be in The Cloud by 2018, but given current technological circumstances, and recent events like the Gulf cables being cut and Amazon S3 failing, businesses today are being asked to take a leap of faith in putting mission-critical applications in The Cloud.

    Of course the real question Nick, is how will your supercomputer compare with The Milliard Gargantu-Brain at Maximegalon, which can count all the atoms in a star in a millisecond?

    PJW

  7. Venkatesh Rao

    Just finished reviewing your book so I thought I’d come on over and check out your blog.

    Interesting point about UTMs. Never seen it taken quite as literally before, but yes, conceptually I suppose your iGod computer could basically be a UTM with a really really large tape.

    But something worries me about that argument. I am not sure quite what as yet, but you probably want to check out Seth Lloyd’s “Programming the Universe” for the most extreme trippy version of that story, taken to the logical “the universe is a computer” limit.

    Perhaps my problem with the UTM-cloud computing connection is that it is merely pleasing conceptually without having any substantial consequences. It seems to belong at the level of analysis where the ecosystem is a giant computer. Yes, but what language can I program Gaia in? If programming this emerging beast remains at the level of contributing bits and pieces to an emergent computational reality (a la Facebook widgets) then we need a different conceptual model of the computer that does for the SaaS age what Von Neumann’s Turing-equivalent architecture did for the box age.

    Food for thought though. And nice book btw :)

    Venkat

  8. Linuxguru1968

    Nick:

    >> The legions of workers…replaced by a small squad of architects that use simple management programs …

    The “small squad” could actually – and most likely will – be the company’s Board of Directors or executive officers, with “middle management” being completely eliminated. These people will not be “IT people” in the sense we understand them now, because these future IT tools will be as ubiquitous as other office tools like pencils, rulers and staplers, requiring no special knowledge of science and engineering. There will of course be tool architects working at the major information utilities that build the automated tools. But the number of them compared to the population of the world will be very small: 1 in 1,000,000,000 or greater. In the future, you would be more likely to win the lottery than make a living as an information tool architect!
