Computers think straight. People think crookedly. Despite all the frustrations that come with thinking crookedly, we have it much better than our calculating kin. Thinking crookedly is more interesting, more rewarding, flat-out more fun than thinking straight. Emotion, pleasure, art, ingenuity, daring, wit, funkiness, love: pretty much everything good is a byproduct of crooked thinking. To think crookedly — to be conscious and self-aware and kind of fucked-up — is a harder feat by far than to think straight. That’s why it’s been fairly easy for us to get machines to think straight, while we still have no idea how to get them to think crookedly.
“Certainly if you had … an artificial brain that was smarter than your brain, you’d be better off,” Sergey Brin once said. Certainly Sergey Brin was wrong. He was thinking too straight. The conscious human mind is buggy, impurely smart, and that’s its greatest feature.
Still, thinking straight, really straight, is a useful skill. After all, it provides a perfect complement to our own way of thinking. That’s why we made computers, and it’s why computers are so valuable in so many situations. For a crooked thinker, there’s nothing like being able to call on a straight thinker from time to time.
In an essay about artificial intelligence in Wired, Kevin Kelly makes an incisive point: for computers, consciousness would be a disaster, a bug-as-bug rather than a bug-as-feature. What we want our AI aides to be, writes Kelly, is “nerdily autistic, supersmart specialists”:
In fact, this won’t really be intelligence, at least not as we’ve come to think of it. Indeed, intelligence may be a liability — especially if by “intelligence” we mean our peculiar self-awareness, all our frantic loops of introspection and messy currents of self-consciousness. We want our self-driving car to be inhumanly focused on the road, not obsessing over an argument it had with the garage. The synthetic Dr. Watson at our hospital should be maniacal in its work, never wondering whether it should have majored in English instead. As AIs develop, we might have to engineer ways to prevent consciousness in them.
All along, our all-too-human AI boffins have been pursuing the wrong goal. If the value of our computers lies in the complementary nature of their intelligence, the last thing we’d want to do is turn them into crooked thinkers like ourselves. Who wants a fucked-up computer?