
The avatar of my father

February 16, 2009

HORATIO: O day and night, but this is wondrous strange.

The Singularity - the prophesied moment when artificial intelligence leaps ahead of human intelligence, rendering man both obsolete and immortal - has been jokingly called "the rapture of the geeks." But to Ray Kurzweil, the most famous of the Singularitarians, it's no joke. In a profile in the current issue of Rolling Stone (not available online), Kurzweil describes how, in the wake of the Singularity, it will become possible not only to preserve living people for eternity (by uploading their minds into computers) but to resurrect the dead.

Kurzweil looks forward in particular to his reunion with his beloved father, Fredric, who died in 1970. "Kurzweil's most ambitious plan for after the Singularity," writes Rolling Stone's David Kushner, "is also his most personal":

Using technology, he plans to bring his dead father back to life. Kurzweil reveals this to me near the end of our conversation ... In a soft voice, he explains how the resurrection would work. "We can find some of his DNA around his grave site - that's a lot of information right there," he says. "The AI will send down some nanobots and get some bone or teeth and extract some DNA and put it all together. Then they'll get some information from my brain and anyone else who still remembers him."

When I ask how exactly they'll extract the knowledge from his brain, Kurzweil bristles, as if the answer should be obvious: "Just send nanobots into my brain and reconstruct my recollections and memories." The machines will capture everything: the piggyback ride to the grocery store, the bedtime reading of Tom Swift, the moment he and his father rejoiced when the letter of acceptance from MIT arrived. To provide the nanobots with even more information, Kurzweil is safeguarding the boxes of his dad's mementos, so the artificial intelligence has as much data as possible from which to reconstruct him. Father 2.0 could take many forms, he says, from a virtual-reality avatar to a fully functioning robot ... "If you can bring back life that was valuable in the past, it should be valuable in the future."

There's a real poignancy to Kurzweil's dream of bringing his dad back to life by weaving together strands of DNA and strands of memory. I could imagine a novel - by Ray Bradbury, maybe - constructed around his otherworldly yearning. Death makes strange even the most rational of minds.


Sorry, I can't believe that Kurzweil is anything other than either lying or schizo/psychotic. He's too well connected and (reported to be) lucid to believe what he's spouting unless he's just that out of it.

Maybe he's a kind of sadist / semi-(benevolent)-sociopath who started spinning this yarn years ago precisely in hopes of snowing a bunch of socialite-capitalists into granting him their good name. Joke's getting a bit stale, though.


Posted by: Tom Lord [TypeKey Profile Page] at February 16, 2009 07:42 PM

I have already expressed my doubts about the Singularity on this blog, but resurrecting dead people based on DNA and peers' memories sounds like wishful thinking.

In Kurzweil's model, a personality can be completely defined by DNA and external memories. I understand that DNA influences our personality a lot. But what about personal experiences, in particular as a kid or teen? Don't they shape our personalities too?

How do you recreate, say, years of practice at storytelling? If Mr. Kurzweil's father had a sharp mind, it was probably not defined by DNA alone - he had probably been exercising his brain for years. Unless of course you believe nature is everything and nurture is nothing.

Posted by: Laurent [TypeKey Profile Page] at February 16, 2009 07:43 PM

Laurent, you don't fully appreciate the depth of the sociopathy here.

Kurzweil is saying that "if it looks a bit like, smells a bit like my father -- and somehow remembers (as if having had) experiences that jibe with my memories -- then I count that 'device' as my father, for practical purposes". "Make me 10 of them and let me compare. I'll use whichever one I want on a given day."

You are supposed to have one of two reactions. You're supposed to go "Gee whiz! Really?!? Sign me up, that's cool!" in which case you look like a fool.

Or, you're supposed to say "Oh, that's so philosophically wrong! It's immoral. How dare you contemplate such evil tech!" In which case you look like a fool.

Or, you can say: um, you know this guy is just bs'ing everyone, right?

Nanobots reading his memory for details of his father and transferring it meaningfully... that's funny! I mean, over the past 30 or so years I'm sure Kurzweil has privately had his ear bent about 1000 different times about how utterly full of sh-t this concept is.

But, you know, there are snipes in the woods tonight and, well, this camp fire won't stay lit much longer, so gather around while Ray tells y'all an important and pointed little story...


Posted by: Tom Lord [TypeKey Profile Page] at February 16, 2009 07:58 PM

Bringing back memories, and reintegrating into a recognizable intelligence, is indeed far-fetched and pretty unlikely. However, the core concept of a singularity - hyper-exponential growth in technological capability - is theoretically solid.

Human thought is an algorithm running on the laws of physics. Sooner or later that algorithm will be understood, simulated and improved upon. Computers will be able to simulate the algorithm because they are also implemented on top of the laws of physics. If digital computers along the lines of the von Neumann machine, or physical equivalents of Turing machines or lambda calculus reducers do not have sufficient semantic richness, other operators may be introduced bringing the required physical law into the theoretical framework. There's nothing magical about human thought.

Posted by: Barry Kelly [TypeKey Profile Page] at February 16, 2009 08:04 PM

Tom - don't forget that people are devices too. Special ones because it's us that's doing the talking - but machines nonetheless.

I do agree with you, however, that hoping to create future appearances based on inferring hidden models from messy and imperfect data about past appearances, is extremely optimistic.

Posted by: Barry Kelly [TypeKey Profile Page] at February 16, 2009 08:08 PM

@Barry: A curious fact about the nature of "algorithms running on the laws of physics" in the sense you use the term is that the laws of physics prevent many of those algorithms from being simulated (run) in any other way than by the one unique way they are actually run. Simply saying that a macroscopic phenomenon is in some mathematical sense "an algorithm" *does not imply* that you can build a machine to do the same thing. Fact of life.

@Barry(2): Did you hear that, just now? Back behind the tents? That was a snipe. I'm dousing the fire. Every man for himself!


Posted by: Tom Lord [TypeKey Profile Page] at February 16, 2009 08:20 PM

This unfortunate comment reveals that Kurzweil does not have a clue what he's talking about. You cannot just re-run somebody's genome, put in some random biographical information and expect the lost person to come back.

The _only_ hope for restoring dead people lies literally in the process of retrieving accurate high-resolution molecular data from their brains while they're still alive. And if they're already dead at the time when this brainscan technology is discovered, you better hope advanced physics will give us some way to probe the universe back in time!

There really is no other way. When we die, the molecular patterns that made up our mind rapidly dissolve and are lost to entropy.

Cloning, on the other hand, will do nothing but get you a blank body. These are neurological discoveries that were made decades ago! It's quite disillusioning to see that an alleged expert in transhumanism like Kurzweil has such a fragile grasp on the actual science required for those processes.

What he describes is a process of making a body that looks like a person you have known, and he hints at some mental conditioning aimed at making that _new person_ think they are someone dead in order to impersonate them for the benefit of the people left behind. That's not right. If we're making "resurrection" an objective, we should at least get our facts straight.

I empathize with him, I really do. And to be fair, there is so much more to do before we can even visit this issue in earnest, like for example, making the actual breakthrough advances required for a singularity-like future. But I worry about the implications of this perfect display of incompetence and ignorance. It shows we've clearly given too much credit to the wrong people.

Posted by: udo [TypeKey Profile Page] at February 16, 2009 08:29 PM

Ah, the discovery by humanities types of the old themes as expressed in good Science Fiction. What is the nature of mind? If it's a bunch of algorithms, can you duplicate it, back it up, emulate it? Can you extrapolate it using partial information, in memories?

It's telling you bring up Ray Bradbury, one of the most literary writers of the genre. It's old, old territory in hard SF.

Posted by: Seth Finkelstein [TypeKey Profile Page] at February 16, 2009 08:32 PM

>> Using technology, he plans to bring his
>> dead father back to life.

I hope they don't host the resurrection software on Windows. It would be funny if he got a BSOD just before his dad rises from the grave ....

Posted by: Linuxguru1968 [TypeKey Profile Page] at February 16, 2009 08:56 PM

I have yet to read the article, so hopefully there's some missing context that makes the excerpt sound less strange.

If he's so confident that advanced technologies can resurrect his father, why isn't he confident that the same technologies could resurrect him too? It would save him the trouble of counting out his 200 pills each day.

Posted by: LuckyLuciano [TypeKey Profile Page] at February 16, 2009 09:17 PM

I have much respect for Kurzweil; his brilliant mind often turns toward projects aimed at easing people's suffering. But he embodies the apotheosis of Descartes' delusion of understanding and controlling the whole of existence through the intellect (and electronics, in his case).

Immortality has been pursued by Taoism as well, but they were looking to connect to the eternal components of our soul, from an inner perspective. I wrote about the Singularity and Kurzweil on The Singularity is Nearest

Posted by: Ivo Quartiroli [TypeKey Profile Page] at February 16, 2009 10:02 PM

You're too kind to him, Seth.

Bradbury never pretended not to be writing fiction, mainly - though perhaps he did say he was using fiction to illustrate certain philosophical concepts. He didn't lie. He didn't pimp BS as if it were fact.

He didn't inspire crap like a "singularity university" subsidized by taxpayers, a toy of google yahoos, allegedly for big high-falootin' purposes but really intended to get a certain style of grad student to pay to pitch to the silly valley vc community.

But, in fairness, I guess they're all just being "colorful".


Posted by: Tom Lord [TypeKey Profile Page] at February 17, 2009 01:33 AM

I was talking more about Nick's reaction, rather than what would seem very prosaic to someone raised on science fiction stories.

I don't know if Ray Kurzweil really believes what he's saying, or is running a game. Pretty odd game if so, but I suppose there's weirder.

My reaction to the ideas is "Yeah, those SF stories were cool, and it'd be fantastic to live in that future, but it's not happening anytime soon, if ever".

Barry Kelly: The key flaw is that there's no reason to assume any intelligence of any sort can exponentially recurse self-improvement, and some good reasons to think that it cannot. This is never going to work to deprogram (pun unintended) True Believers though, who are just going to say the opposite.

Posted by: Seth Finkelstein [TypeKey Profile Page] at February 17, 2009 03:29 AM

The book's already been written: 'Accelerando' by Charles Stross. It was shortlisted for two awards -- the 2006 Arthur C. Clarke award for best science fiction novel, and the BSFA award for best SF/F novel of 2005. Interesting read.

Posted by: BryanK [TypeKey Profile Page] at February 17, 2009 05:28 AM

Somebody get that man some Kool-aid antidote, stat! It's fascinating to me that the technological fantasy that is the singularity could potentially be rooted in a yearning for lost loved ones. I think we can all sympathise with that very human need, yet at the same time recognise that our respect for the idea (if we had any) is diminished. I wrote a counter to the singularity theory that I've called the 'Shakespeare theory' on my blog here: http://www.bcs.org/server.php?show=ConBlogEntry.859

On the subject of AI and cognition, I believe that the idea we can simply discover an algorithm and sort the whole thing out is based on some highly-flaky assumptions. I remember discussing aspects of this at University with Prof Stevan Harnad, who is a real expert on this...and also discovered that the archive discussion from 1995 is still on the web - http://users.ecs.soton.ac.uk/harnad/Hypermail/Cogpsy/0035.html - very strange experience to hear a younger version of myself on the web. It's the first time it's really happened to me.

Posted by: David Evans [TypeKey Profile Page] at February 17, 2009 10:34 AM

Maybe our complete lives are already imprinted within our DNA? Science for the longest time said most DNA was "junk DNA." But what if it does record everything?

Instinct is memory passed on. Maybe tales of reincarnation are just DNA memories. Our current computing technology and storage capabilities are achieved with binary code, 1s and 0s. Maybe our DNA code, ACGT, is exponentially more capable, enough to record a complete lifetime or even every life in our chain back to the beginning.

Maybe we just need to learn how to read it to "view" all of our history.
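For what it's worth, the raw storage arithmetic is easy to sketch. With four symbols, each base carries two bits, so a genome's capacity can be estimated directly (the figure below assumes the commonly cited ~3.2 billion base pairs for a human genome):

```python
# Back-of-the-envelope estimate of raw DNA storage capacity.
# Assumes ~3.2 billion base pairs for a human genome (a commonly cited figure).

BITS_PER_BASE = 2                   # four symbols (A, C, G, T) -> log2(4) = 2 bits
HUMAN_GENOME_BASES = 3_200_000_000

total_bits = HUMAN_GENOME_BASES * BITS_PER_BASE
total_bytes = total_bits // 8

print(f"{total_bytes / 1e6:.0f} MB")  # roughly 800 MB of raw capacity
```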

Now there's an invention: a DNA reader to see all of history through a person's DNA lineage.

Or maybe I should write that story.

Posted by: ordaj [TypeKey Profile Page] at February 17, 2009 01:22 PM


I don't see how it would be an "odd" game. It seems pretty lucrative. You come by having audience with big, loose money. You build a reputation as a clever, well-connected, scientifically-informed BS artist. You hit big silly themes like infinite intelligence or immortality - great stuff for a cocktail party. That crowd takes your BS to be good BS - i.e., plausible stories. They become your patrons as you run cover for things like the "pay to pitch" scheme running at Moffit field now.

Only, deep down, you (the "BS artist") don't respect your patrons. And you know enough science to see for yourself the divergence between what your patrons think is plausible BS and what the scientists and engineers think is plausible BS. And you see that the closer a scientist or engineer is to the big money crowd, the less likely they are to be frank about when the monied ones are mistaking ridiculous, over-the-top, this-is-a-joke-can't-you-see BS for plausible BS. In the echo chamber of your patrons, they can't see their own cluelessness, but a lot of people outside those loops can.

So you pants them.

Sounds like a not-very-strange game to me although, as you can tell, I don't approve: it's rude, indirect, self-serving, and socially destructive. One understands the counter-cultural subversive instinct that it contains but -- it ain't the right way to do it.


Posted by: Tom Lord [TypeKey Profile Page] at February 17, 2009 04:38 PM

I haven't investigated, but my impression was that Ray Kurzweil, while not superrich, was well-off enough and talented enough from his work that it wouldn't be worthwhile for him to be running technospiritualism games. I can see there being a relatively modest amount of money in it, but not more than he could make personally by playing it "straight".

This is in contrast to, for example, groups of people I see who seem to have decided that being a college professor in the humanities is far inferior to being a huckster selling Internet hype to worried corporate managers. That's understandable economically, since the salary of an ordinary good humanities professor is far below that of an ordinary good corporate consultant, and the financial upside doesn't compare. But he's in an entirely different category.

I could be wrong, some of the system is a mystery to me.

Posted by: Seth Finkelstein [TypeKey Profile Page] at February 17, 2009 06:31 PM

The poignancy you refer to, Nicholas, is something I wrote about in a blog post I titled Life in Limbo - Iron Age to Information Age (linked in my sig). I hope you'll forgive me for reproducing it in full here, but I think it ties in well with your point -

Life in Limbo - Iron Age to Information Age

lim-bo [lim-boh]

* neither dead nor alive
* an intermediate, transitional, or midway state or place.
* a place or state of imprisonment or confinement.

Bog bodies, also known as bog people, are preserved human bodies found in sphagnum bogs in Northern Europe, Great Britain and Ireland. Unlike most ancient human remains, bog bodies have retained skin and internal organs due to the unusual conditions of preservation.

A few nights ago I watched a fascinating Timewatch documentary on the Iron Age bog bodies found in Ireland in 2003. The brutal murders of Clonycavan Man and Old Croghan Man were both carbon dated to around 300 BC and theorized to have been carried out as sacrifices to the gods of fertility. Not only were the twenty-something victims physically tortured before death but also, ghoulishly, subjected to the ultimate in metaphysical torment - their bodies were tied down in the shallow water of the bog where they would be preserved for all time in a state of limbo, neither in the land of the living nor of the dead but imprisoned between.

I doubt that in 300 BC the grieving relatives of Clonycavan Man and Old Croghan Man had any portrait or physical image of their loved ones to hold on to. Other than a brooch or item of clothing perhaps, the murdered men had vanished off the face of the earth and with them, essentially, all traces that they'd ever lived.

Fast forward two millennia. In August 2006 I lost my father to a short battle with cancer. In the weeks and months before Dad's passing I took dozens of photos and videos on my mobile phone during days out, birthdays, family get-togethers and random non-events. I know that Dad features in many of those digital records but 18 months on I still can't bring myself to review them. I treasure them but I can't look. I've stored them on a hard drive, archived to CD and backed up to the web because I know sometime, I'll want to share them with the whole family. But not for now. It would only torment me.

Fast forward again. Steven Spielberg's sci-fi thriller Minority Report stars Tom Cruise as a futuristic policeman driven by guilt and the memory of his kidnapped son Sean to prevent similar crimes befalling other families. Cruise's character, John Anderton compulsively plays back holographic home movies of Sean, in order to relive happier times. Ironically, reliving the pain of losing Sean as each movie comes to an end. Tormenting himself.

A year ago my nighttime dreams were dominated by memories of my father. I frequently woke up with his face and voice still vividly in my mind. And then, as the realization set in that it was only an illusion, a deep and unsettling despondency enveloped me, followed by a desire to return to sleep and replay the dream. I didn't want to let go.

Letting go. It's one of the final stages in the process of grieving - (1) emotional numbness, (2) deep yearning, (3) anger / guilt, (4) sadness / loneliness, (5) letting go / acceptance, (6) hope. John Anderton didn't want to let go either. His anger and guilt were overlapping with sadness and loneliness. So he held onto Sean by projecting his dreamlike hologram, or avatar, in mid-air and reliving the past. Over and over again.

Avatar is the name of James Cameron's new movie, now in production, due for release in December 2009. In Cameron's original script treatment of Avatar a man tries to make his way as a miner by combining with an alien during an interplanetary war in which aliens can manifest themselves by possessing human bodies — avatars.

Cameron will use his own Reality Camera System to film Avatar in 3-D and plans to shoot exclusively in 3-D going forward. He's not the only one. When Wired reported on Beowulf, it explained that 3-D is staging a big comeback in Hollywood as it battles against new media and home theater systems. "[Hollywood is] struggling to dazzle a moviegoing public accustomed to multimillion-dollar computer-generated effects. This time around, a handful of blockbuster directors are driving the action: Steven Spielberg, Peter Jackson, Robert Zemeckis, and James Cameron. 'They're all feeding off each other,...they're all pushing for [3D]'."

Inevitably of course this 3D technology will also, as with digital camcorders, large size screens and surround sound, trickle down to the consumer market. And we'll be filming our home movies in High Definition 3D. Wired subsequently reported on a new technology for 3D holographic movies - "Made by Quebecois company RabbitHoles, the advertisements feature one of the film's characters tearing up the dance floor in an eight-second clip that can be "played" in 3-D by walking from left to right of the poster. Despite the images' slightly transparent quality, what you see is pretty close to the real thing."

So how long will it be before we're producing holographic home movies? Not long. Not long until we're recording, lifestreaming and projecting phantom-like images of ourselves and our loved ones. Like John Anderton reliving joyful moments with his son Sean. Like John Anderton being tormented by a digital ghost.

'Avatar' is derived from a Hindu word referring to the incarnation, or bodily manifestation, of a higher being onto planet Earth. As we move ever closer to the capability of holographic home movie-making is there a danger that vulnerable minds will freeze frame the grieving process by grasping at the illusion of virtual reincarnation? Will we be possessed by avatars? Will it be harder than ever to let go? Not accepting the death of a loved one. Leaving them neither in the land of the living nor of the dead, but imprisoned between?

In the Iron Age it was the dead who were sometimes left in limbo. In the information age it could well be the living.

Posted by: jcorbett [TypeKey Profile Page] at February 17, 2009 06:42 PM

I agree with Seth, I don't think Kurzweil is a huckster. I think he's 100% sincere in his beliefs.

Posted by: Nick Carr [TypeKey Profile Page] at February 17, 2009 11:03 PM

We're talking apples and oranges then, because I don't believe in a scalar-valued huckster/sincere axis. A game is a game is a game, and an interpretation of a game is something else. Degrees of sincerity are just a self-interpretation in the moment - but, as method acting shows, that doesn't necessarily mean anything lasting or "of the essence" about the underlying game.


Posted by: Tom Lord [TypeKey Profile Page] at February 18, 2009 01:26 AM

put more simply:

huckster is as huckster does.


Posted by: Tom Lord [TypeKey Profile Page] at February 18, 2009 01:27 AM

If Kurzweil is guilty of anything, it's being too optimistic. Technologists tend to lead you to believe that technologies like this are just around the corner when in fact they are centuries away. What he is describing will come to pass, but not until long after we are all fossils in somebody's back yard. Perhaps he's motivated by the reality of his own mortality. As Samuel Johnson once observed, "Nothing focuses the mind like a hanging".

Posted by: Linuxguru1968 [TypeKey Profile Page] at February 18, 2009 06:28 PM


"@Barry: A curious fact about the nature of "algorithms running on the laws of physics" in the sense you use the term is that the laws of physics prevent many of those algorithms from being simulated (run) in any other way than by the one unique way they are actually run."

That assumes that the processes of e.g. the neuron are already the most efficient implementation of their function. Given how blind a watchmaker evolution is, that's incredibly unlikely - check out the circuitous route of the vas deferens, or the inside-out design of the eye's retina (the photoreceptors are *underneath* almost everything else), for simple examples of dumbass design in mother nature.

"Simply saying that a macroscropic phenomenon is in some mathematical sense "an algorithm" *does not imply* that you can build a machine to do the same thing. Fact of life."

Actually, the very facts of life do imply that. We have an existence proof already: people (male and female pairs) have been building them since time immemorial. Surely I don't have to explain the birds and the bees to you!

Evolution is so clearly such a pitiful inventor and refiner of designs that suggesting that the addition of some rationality wouldn't improve things is wholly unconvincing.

Posted by: Barry Kelly [TypeKey Profile Page] at February 18, 2009 08:06 PM

@Barry Kelly - What you're arguing is an improvement in a constant - which is quite different from exponential.

That's another key flaw, the tendency to conflate the two.

Posted by: Seth Finkelstein [TypeKey Profile Page] at February 18, 2009 09:02 PM


"Evolution is so clearly such a pitiful inventor and refiner of designs [....]"

I understand each of those words in isolation. I understand the abstract syntax of the sentence. But... you're making stuff up and talking nonsense. Those terms "pitiful", "inventor", "refiner", and "designs" don't really have much meaning in this context, if you ask me.


Posted by: Tom Lord [TypeKey Profile Page] at February 18, 2009 09:58 PM

Note also, it's entirely possible that whatever improvements are made just mean you run into some other limiting factor all the faster.

Posted by: Seth Finkelstein [TypeKey Profile Page] at February 18, 2009 10:26 PM

Seth - I have a computer science education, I am fully aware of the curves involved in constant, linear, polynomial, exponential, factorial etc. functions. I'll thank you not to assume my ignorance.

Posted by: Barry Kelly [TypeKey Profile Page] at February 19, 2009 05:17 AM

Tom - the words have meaning, and since, by your own admission, you understand the syntax, you should understand the logical structure and thereby semantics of the statements I'm making. That means that if you disagree with what I am saying, you must be able to either point to a factual inaccuracy or a logical mistake. Since you have done neither, what am I to assume?

Certainly, by talking about personifications such as "mother nature", "blind watchmaker" etc., and verbs such as "invent" and "design", I am speaking in metaphor. However, the processes of rational design and of evolution are both searches that maximize utility functions over solution spaces. I'm not going to apologize for using such metaphors as shorthand for describing the physical, mechanical process of evolution.

Posted by: Barry Kelly [TypeKey Profile Page] at February 19, 2009 05:28 AM

Seth - now that I read your comment again, I think you've read me incorrectly. When describing improvements in the efficiency of the neuron, I'm not trying to describe a mechanism for super-exponential growth. Improving the efficiency of a neuron, assuming that some kind of phase change didn't occur, would not result in increased intelligence and thereby growth - it would just result in increased speed of thought as measured objectively.

Just focusing on neurons and their peculiar inefficiencies is to focus on too low a level. It's the algorithm that's important; applying the algorithm to itself is the key behind the super-exponential improvement.

Neurons are just a physical implementation of the algorithm. Stressing the "mechanicality" of neurons is just a way of preempting the anti-AI arguments of those ghost in the machine types.

Posted by: Barry Kelly [TypeKey Profile Page] at February 19, 2009 05:44 AM

Things which are syntactically and by word choice metaphors aren't always meaningful. I think it's the case here that you've got some non-meaningful metaphors going.

For example, with respect to some "things" (let's call them "artifacts") we might try to talk about the act of "inventing" those artifacts and of the "design" that they represent.

Well, these metaphors are only sensible if there are some good analogies to be found there.

When we talk about inventing and designs in a non-metaphorical way there are some essential elements there. "Inventing" is an act of problem solving that yields a "design". A "design" is an abstract conception of the essential properties of certain artifacts. The artifacts themselves then realize the design, along with having incidental, inessential qualities.

Looking at, for example, a "species", I see nothing usefully analogous to a "design" and thus nothing to suggest it's useful to think of "inventing" as having taken place.

One likely candidate for what constitutes a "design" could be, I guess, the genome of that species. This doesn't really work out very well, though. Genes play a role in the form and function of biological artifacts, but they don't determine the essential characteristics. Rather, life forms are fully determined by a whole complex of feedback systems both within cells, between cells in multi-cellular forms, and within the environment. The information that defines the essential characteristics of a life form is not found solely in the genetic sequence: it is scattered throughout the cells, between the cells, and in the environment. There's no "design" to isolate. No "design" or anything like it really exists.

Have you ever played with video feedback, especially using an analog video camera? You know, hook the camera up to a TV, point it at the screen, and turn it upside down? The result you get is a chaotic system that manages to float quasi-stable, recognizable patterns. You can even screw around with the camera or stick fingers in front of the lens to "influence" the visual patterns on the screen - but in most set-ups the equipment is noisy enough and/or performing computations rapidly enough that you can't really control the image much - not with any intentionality behind what you do. The "artifact" here - the image on the screen - has no "design" and was never "invented".

Biological life, I submit to you, is much more like the quasi-stable images on that feedback screen only, instead of a single, fairly simple feedback loop, life forms are the quasi-stable artifacts that emerge from many, many, many more feedback cycles than you can conceive of easily, all interacting to produce a chaotic system with some quasi-stable attractors (like the flower in your yard or your neighbor, Fred).

Rationality and applied intelligence have a place there, for the goals of making life better from a human perspective. For example, we learned agriculture: a pretty hands-on, brute-force way to shape some of those feedback cycles en masse to achieve a human aim.

It doesn't follow from that that you can get to sweeping conclusions like "blood vessels over photoreceptors is dumb" or that you can expect to do much better. It doesn't follow from the possibility of human *influence* that it's suddenly a wise idea to tinker with, say, the matrix which is the planet's genetic heritage.

Indeed, the very nature of life's systems - their feedback-originating chaos - suggests quite the opposite. It suggests that the rational thing is not to be too eager to perturb systems that appear to be essential to the quasi-stability known as, for example, humans.

Also: Interesting phenomena which are "too complicated to simulate" aren't, it turns out, the exception - they're the rule (which has some exceptions).


Posted by: Tom Lord [TypeKey Profile Page] at February 19, 2009 03:28 PM

Tom & Barry:

When you start applying words like "algorithm" to phenomena observed in the universe, or refer to evolution as an "inventor" or "refiner," you are becoming closet creationists. This type of language implies a creator, or at least something like creative design, which undermines the scientific method. Most scientists view the universe as self-organizing from the sub-atomic/big-bang singularity upward, not as a machine designed by God from the top down.

You don't need a cosmic programmer to create an algorithm to keep the planets orbiting the sun, even though the mind might create one as a mathematical model. This idea of blind self-organization is much better explained with something like von Neumann's cellular automata. Stephen Wolfram's book A New Kind of Science discusses this kind of approach, which might be a new lead in the "theory of everything," since obviously string theory is dead.
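A tiny sketch of what that self-organization looks like in practice - an elementary cellular automaton of the kind Wolfram catalogs, here in Python (the rule number and grid size are arbitrary choices):

```python
# Elementary cellular automaton: complex structure from a simple local
# rule, with nobody steering the outcome (Rule 30, Wolfram's numbering).
RULE = 30
rule_table = {i: (RULE >> i) & 1 for i in range(8)}

width, steps = 64, 16
cells = [0] * width
cells[width // 2] = 1  # a single seed cell

for _ in range(steps):
    print(''.join('#' if c else '.' for c in cells))
    # Each cell's next state depends only on itself and its two
    # neighbors (wrapping around at the edges).
    cells = [rule_table[(cells[(i - 1) % width] << 2)
                        | (cells[i] << 1)
                        | cells[(i + 1) % width]]
             for i in range(width)]
```

Nothing in those few lines "designs" the intricate triangle-filled pattern that scrolls out; it just falls out of repeated local updates.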

Posted by: Linuxguru1968 [TypeKey Profile Page] at February 19, 2009 04:36 PM

Linuxguru - evolution is most certainly an algorithm - nondeterministic, but an algorithm nonetheless. Simplified variants are put to practical use in software.
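For instance, a toy genetic algorithm - selection, crossover, mutation - evolving bit-strings toward an arbitrary target (every parameter here is an illustrative choice, not anything canonical):

```python
import random

random.seed(42)

TARGET = [1] * 20          # an arbitrary fitness peak
POP, GENS, MUT = 30, 60, 0.05

def fitness(ind):
    return sum(a == b for a, b in zip(ind, TARGET))

# Random initial population of bit-strings.
pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]

for _ in range(GENS):
    # Selection: keep the fitter half.
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]
    # Variation: single-point crossover plus occasional bit-flips.
    children = []
    while len(parents) + len(children) < POP:
        a, b = random.sample(parents, 2)
        cut = random.randrange(len(TARGET))
        child = a[:cut] + b[cut:]
        child = [bit ^ (random.random() < MUT) for bit in child]
        children.append(child)
    pop = parents + children

print(fitness(max(pop, key=fitness)))  # best individual's fitness
```

Nondeterministic throughout - yet it reliably climbs toward the target, which is the whole point of calling evolution an algorithm.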

The execution of an algorithm doesn't need a creator, external programmer or other deus ex machina, and I never suggested it did. I strongly reject your assertion that this implies a creator; in fact, I believe it is not possible to assert otherwise: if you don't accept that physics can implement algorithms spontaneously, without design, then you cannot believe that human rational thought is possible without some kind of external intervention.

That is to say, if you deny that evolution is an algorithm, you are the closet creationist.

Posted by: Barry Kelly [TypeKey Profile Page] at February 19, 2009 09:42 PM

Tom - "It doesn't follow from the possibility of human *influence* that it's suddenly a wise idea to tinker with, say, the matrix which is the planet's genetic heritage."

I understand your cautious Luddism, but I also understand that progress generally confirms the expectations of the pessimistic in the short term, but the optimistic in the long term. I'm not pessimistic about human ingenuity in the long run, though there will always be mistakes along the way. The "tinkering" may be slowed, but it will never stop so long as the curious still live.

However, the focus on genetic meddling and other political and religious hot-button topics - particularly the Green religion - is a bit of a red herring. I would expect that the tinkering, per se, is not strictly necessary. As to how close any possible Singularity could be, I suspect that at this point it's software that's the problem, not hardware. Figuring out the algorithm - and I persist in using the term - as implemented by the human mind would help, and wouldn't require meddling, only (very) close observation.

Oh, and I used "design" mostly as meaning the accumulated results (noun) of the process of evolution (verb). I don't disagree with the meaning of your objection when the word is interpreted in a more anthropomorphic sense. I'm sorry if my metaphor muddied the waters too much.

Finally, about the "interesting phenomena" - if you take the position that "interesting" relates strictly to the phenomena with most information (Shannon entropy), then be my guest to study random noise! Usually, humans take more interest in arrangements that occur more than once - i.e. patterns - which necessarily contain less information than their physical instantiations. Simulations in such cases can be abstracted (i.e. dropping the unnecessary information) and are often amenable to symbolic interpretation without losing the essence. The very facts that our brain is quite similar throughout in substance, and that brain damage can be compensated for by nearby areas, are strong suggestions that the essence is quite a lot smaller than e.g. an atomic-level view would take.
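That claim about patterns versus their physical instantiations can be made concrete with a compressor standing in for the abstraction step (a rough sketch; exact sizes depend on the compressor):

```python
import random
import zlib

random.seed(0)
noise = bytes(random.randrange(256) for _ in range(4096))
pattern = bytes(range(64)) * 64            # same length, highly regular

print(len(zlib.compress(noise)))    # barely shrinks: near 4096 bytes
print(len(zlib.compress(pattern)))  # collapses: the pattern is small
```

The noise carries maximal Shannon information and is nearly incompressible; the patterned bytes, though physically the same size, reduce to a small description - which is what makes simulating patterns, rather than noise, tractable.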

Posted by: Barry Kelly [TypeKey Profile Page] at February 19, 2009 10:07 PM

@Barry: whatever.

Posted by: Tom Lord [TypeKey Profile Page] at February 19, 2009 11:58 PM

@Barry Kelly - I wasn't claiming you didn't know the difference between constant and exponential - but rather, that the argument that silicon intelligence would be better at design improvements yields AT MOST a constant factor, not an exponential one - if that (i.e. even if there are improvements, they might not have maximum effect because of bottlenecks elsewhere).

When you say - "applying the algorithm to itself is the key behind the super-exponential improvement." - yes, but why should it be assumed that this can be done? Implicitly, it assumes the conclusion. Perhaps it can't, or is difficult enough so that it doesn't become exponential. Certainly that seems very much the case in all prior experience.

This is what I call a rhetorical "burden of proof" problem. Proponents should not be able to hand-wave about "applying the algorithm to itself" and then require opponents to give a detailed refutation disproving it. The burden of proof is on them to do more than sketch stories.

Posted by: Seth Finkelstein [TypeKey Profile Page] at February 20, 2009 01:50 PM


By definition an algorithm IS deterministic; for a given set of inputs there will always be a single output every time the algorithm runs. In contrast, a heuristic takes a single set of inputs and may result in differing outputs based on assumptions about other variables.

Certainly physical systems like DNA replication, protein synthesis and planetary motion involve cycles that can be mathematically described. However, using terms like "algorithm" that are usually associated with intelligently created artificial systems implies a creator or programmer who sets up the laws and puts the system into motion. I think this kind of language (although it sells books) undermines the pure scientific method.

Posted by: Linuxguru1968 [TypeKey Profile Page] at February 20, 2009 04:36 PM



Rough Type is:

Written and published by
Nicholas Carr
