With the digital computer, we have created a machine that we can program not only to help us but to trick us – the greatest of all tricksters, perhaps, because it hoodwinks us about what is most central to who we are: the nature of our thought, the way we make sense. In "The Stupidity of Computers," a new article in n+1, David Auerbach describes the nature of the trickery and our complicity in it:
The dissemination of information on the web does not liberate information from top-down taxonomies. It reifies those taxonomies. Computers do not invent new categories; they make use of the ones we give them, warts and all. And the increasing amount of information they process can easily fool us into thinking that the underlying categories they use are not just a model of reality, but reality itself. […]
We will increasingly see ourselves in terms of these ontologies and willingly try to conform to them. This will bring about a flattening of the self—a reversal of the expansion of the self that occurred over the last several hundred years. While in the 20th century people came to see themselves as empty existential vessels, without a commitment to any particular internal essence, they will now see themselves as contingently but definitively embodying types derived from the overriding ontologies. This is as close to a solution to the modernist problem of the self as we will get.
If and when the Turing Test is finally passed, it probably won’t mean that computers have learned what it is to be human. It will probably mean that we’ve forgotten.
RELATED: In American Scientist, Brian Hayes provides a lucid overview of how A.I. research has changed in recent years, using three case studies – checkers, translation, and question-answering – to illustrate the shift in strategy away from subtle mind-replication (not very effective) and toward brute-force data-crunching (sometimes startlingly effective). Stupid is smart in its own way.