I’m with stupid

With the digital computer, we have created a machine that we can program not only to help us but to trick us – the greatest of all tricksters, perhaps, because it hoodwinks us about what is most central to who we are: the nature of our thought, the way we make sense. In “The Stupidity of Computers,” a new article in n+1, David Auerbach describes the nature of the trickery and our complicity in it.

A bit:

The dissemination of information on the web does not liberate information from top-down taxonomies. It reifies those taxonomies. Computers do not invent new categories; they make use of the ones we give them, warts and all. And the increasing amount of information they process can easily fool us into thinking that the underlying categories they use are not just a model of reality, but reality itself. […]

We will increasingly see ourselves in terms of these ontologies and willingly try to conform to them. This will bring about a flattening of the self—a reversal of the expansion of the self that occurred over the last several hundred years. While in the 20th century people came to see themselves as empty existential vessels, without a commitment to any particular internal essence, they will now see themselves as contingently but definitively embodying types derived from the overriding ontologies. This is as close to a solution to the modernist problem of the self as we will get.

If and when the Turing Test is finally passed, it probably won’t mean that computers have learned what it is to be human. It will probably mean that we’ve forgotten.

RELATED: In American Scientist, Brian Hayes provides a lucid overview of the way A.I. research has changed in recent years, using three case studies – checkers, translation, and question-answering – to illustrate the shift in strategy away from subtle mind-replication (not very effective) and toward brute-force data-crunching (sometimes startlingly effective). Stupid is smart in its own way.
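
To make the contrast concrete, here’s a toy sketch of the brute-force approach – mine, not anything from Hayes’s article. Instead of encoding rules of grammar or meaning, a statistical system can simply score candidate sentences against word-pair counts from a corpus:

    from collections import Counter

    # A (comically small) stand-in for the huge corpora real systems use.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()
    bigrams = Counter(zip(corpus, corpus[1:]))

    def fluency(sentence):
        # Score a candidate by how often its adjacent word pairs
        # occur in the corpus; no grammar rules anywhere.
        words = sentence.split()
        return sum(bigrams[pair] for pair in zip(words, words[1:]))

    candidates = ["the cat sat on the mat", "cat the on sat mat the"]
    print(max(candidates, key=fluency))  # -> the cat sat on the mat

Scale the counts up by a few billion and you get something that looks uncannily like judgment, without any model of mind at all.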

One thought on “I’m with stupid”

  1. Tom Lord

    The N+1 article leaves me a bit cold, as does the response it drew from you. This comment is at about rough-draft level, but I have to get it off my chest. It goes roughly in parallel with the N+1 article, so it might make sense to look at them side by side:

    In the first few sections, the N+1 author observes that strong AI is nowhere in sight. Sure, fine.

    Next the author offers a quick and dirty — and very bad — theory of language. One could write many thousands of words about the problems in this section, but here are a few key points. The author writes:

    Take the fairly simple sentence “I will go to the store if you do.” For an English speaker, this sentence is unambiguous. It means, “I will go to the store only if you go with me (at the same time).”

    First, the sample sentence taken in isolation is ambiguous. The author even goes on to cite alternative meanings.

    Second, it is odd to fixate on “the meaning” of a sentence and even odder to say that what a sentence “means”, in some primary sense, is another sentence. The author has presumed the existence of some thing — “the meaning” — and is already in trouble there. He then offers that an alternative sentence is “the meaning”. I’m not sure what is gained there. Does that second sentence “mean” itself or is there some third sentence waiting in the wings?

    Having set himself up a trap, the author immediately steps into it:

    Second, a program analyzing natural language must determine what state of affairs that sentence represents in the world, or, to put it another way, its meaning.

    The “meaning” of a sentence, which earlier was described as another sentence, is here suddenly a “state of affairs” which a program must “determine” to be able to “analyze” a sentence.

    Dizzy, yet? We seem to be multiplying invisible metaphysical entities without bound but not really getting any closer to artificial intelligence.

    Wittgenstein’s notion of “language games” provides a pretty simple counter to such a view. We don’t need to posit this mysterious metaphysical thing, “the meaning” or “the state of affairs”. Language is interesting in the context of its embodied use, not in relation to imaginary Platonic ideals.

    Eliza, Jeeves, Google search, The Gonz — all of these programs implement primitive languages perfectly.
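
    To see what I mean by “implements a primitive language perfectly,” here is a toy Eliza-style fragment (my own sketch, nothing like Weizenbaum’s actual code). Within its tiny pattern language, the program’s behavior is complete and exactly defined:

        import re

        # The whole "language": a couple of patterns and response templates.
        RULES = [
            (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
            (re.compile(r"\bi feel (.+)", re.I), "What makes you feel {0}?"),
        ]

        def reply(utterance):
            # Answer with the first matching rule's template, or a fallback.
            for pattern, template in RULES:
                match = pattern.search(utterance)
                if match:
                    return template.format(match.group(1))
            return "Tell me more."

        print(reply("I am tired of taxonomies"))
        # -> Why do you say you are tired of taxonomies?

    There is no “meaning” left over for the program to miss; the language just is what the rules do.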

    The issue that inspires the author is apparently that interacting with any of the programs that implement those languages is not really much like free-form interaction with a person, except perhaps in the most superficial ways.

    The main error in the section on language is that it posits as an axiom the idea that to make a program more “human-like”, the program must be able to analyze sentences to discover an objective meaning, some representation, I guess, of the “state of affairs”. That’s not wrong so much as it’s incoherent. To analyze a sentence a program must analyze it, period. Every Google search or Ask Jeeves query ever entered was analyzed. This incoherence gets him into further trouble:

    The author turns to Google as an example of a system that just punts on “understanding human language”. Well, with our Wittgensteinian clarity we can observe that Google search defines a human language and a machine process that uses it. The specialized, simple Google search language is (by definition) perfectly understood by the automated processes that implement it for Google. It’s also understood by people who exchange queries in email, by someone reading a web server log to understand what kind of searches brought people to the site, and so forth.
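
    By way of illustration – my sketch, obviously, not Google’s actual parser – such a specialized language can be tiny, say quoted phrases plus bare terms and nothing else, and still be “perfectly understood” by the process that implements it:

        import re

        # The entire grammar: a quoted phrase or a bare term.
        TOKEN = re.compile(r'"([^"]*)"|(\S+)')

        def parse_query(query):
            # Split a query into exact phrases and individual terms.
            phrases, terms = [], []
            for quoted, bare in TOKEN.findall(query):
                (phrases if quoted else terms).append(quoted or bare)
            return {"phrases": phrases, "terms": terms}

        print(parse_query('wittgenstein "language games" meaning'))
        # -> {'phrases': ['language games'], 'terms': ['wittgenstein', 'meaning']}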

    Google innovated to create a new specialized human language that blends with, but can also be regarded as a distinct territory within, human language generally.

    (It’s not clear what the author thinks he means by asserting that Google’s language doesn’t involve semantic analysis. He just seems confused.)

    Skipping over the section on “Money”…

    The author turns to several examples of “taxonomies”, understood to be duals of, or at least components of, “ontologies”: Wikipedia classifies articles, Amazon classifies goods for sale, social networks classify us into demographics on steroids, the government and similar forces classify us by features of their surveillance.

    The author reasonably points out problems with the processes that use this new proliferation of taxonomies, imposed by the few upon the many. They may be invasive, oppressive, overly reductive, and so forth.

    And we immediately get back into the author’s confusions when he turns to the “Rhizome”. We wind up with one of the paragraphs you quoted, Nick, with the money-shot sentence at the end:

    The dissemination of information on the web does not liberate information from top-down taxonomies. It reifies those taxonomies. Computers do not invent new categories; they make use of the ones we give them, warts and all. And the increasing amount of information they process can easily fool us into thinking that the underlying categories they use are not just a model of reality, but reality itself.

    Pardon me?

    I’m sorry, but I’m simply not sure what on God’s green earth it might mean to be “fooled” into thinking that an “underlying category” is “reality itself”.

    Where to begin?

    For one thing, there’s nothing about these new categories that I see that makes them “unreal”. They are reality. When Amazon classifies me as someone who has recently bought talcum powder for an embarrassing itch — well, that’s really how Amazon classifies me, and unless their program has a bug it’s really true that I recently bought the powder. [Note: it’s not. It’s an example. No embarrassing itches at the moment, knock wood.]

    I think the author is trying to suggest that, in terms of goals and self-understanding and such, we might come to measure ourselves against these new taxonomies too much. We’ll spend too much time perceiving things through those lenses:

    We will increasingly see ourselves in terms of these ontologies and willingly try to conform to them.

    OK. As Foucault observed, this kind of thing is nothing new. Innovations in the discourse of religion, in the practices of education, in the invention of the demographic as a tool of social policy formation… these are all pre-computer areas where you can find the creation of new ontologies which, subsequently, people measure themselves against. We are formed in a mesh of power relations, and how various instruments of power classify us takes on a real significance.

    Old news, but, sure, maybe the intertubes and interweb accelerate the pace of change or something. Technological innovations in computing certainly raise the stakes of mass classifications of people.

    But…

    This will bring about a flattening of the self—a reversal of the expansion of the self that occurred over the last several hundred years.

    At this point I’m just starting to get angry at the author because, really, what the hell does that sentence even mean? (See what I did there?)

    While in the 20th century people came to see themselves as empty existential vessels, without a commitment to any particular internal essence,

    Please make it stop.

    they will now see themselves as contingently but definitively embodying types derived from the overriding ontologies.

    My kindest guess is that the author means that people will have some awareness of how at least some other people classify them. And apparently this will be new, since the 20th century. I disagree that that’s new.

    This is as close to a solution to the modernist problem of the self as we will get.

    I would have appreciated a word or two in there, somewhere, making at least some slight attempt to define the “modernist problem of the self”.

    -----------------------------------

    Look, there is something good waiting to be written that’s vaguely in this area.

    Computers help classify people. That’s of significance to political power. That’s why Nazi Germany was such a big client of IBM, for example. And when the classifications involve feedback, people learn to see themselves through those lenses, even if imperfectly.

    All this is important. Vitally important.

    It deserves better treatment than confused views of AI, Language, and metaphysics.
