No. Digital computers won’t; and in the world as we know it, they are the only candidate machines.
What does “human” mean? Humans are conscious and intelligent—although it’s curiously easy to imagine one attribute without the other. An intelligent but unconscious being is a “zombie” in science fiction—and to philosophers and technologists too. We can also imagine a conscious non-intelligence. It would experience its environment as a flow of unidentified, meaningless sensations engendering no mental activity beyond mere passive awareness.
Some day, digital computers will almost certainly be intelligent. But they will never be conscious. One day we are likely to face a world full of real zombies and the moral and philosophical problems they pose. I’ll return to these hard questions.
*****
The possibility of intelligent computers has obsessed mankind since Alan Turing first raised it formally in 1950. Turing was vague about consciousness, which he thought unnecessary to machine intelligence. Many others have been vague since. But artificial consciousness is surely as fascinating as artificial intelligence.
Digital computers won’t ever be conscious; they are made of the wrong stuff (as the philosopher John Searle first argued in 1980). A scientist, Searle noted, naturally assumes that consciousness results from the chemical and physical structure of humans and animals—as photosynthesis results from the chemistry of plants. (We assume that animals have a sort of intelligence, a sort of consciousness, to the extent they seem human-like.) You can’t program your laptop to transform carbon dioxide into sugar; computers are made of the wrong stuff for photosynthesis—and for consciousness too.
No serious thinker argues that computers today are conscious. Suppose you tell one computer and one man to imagine a rose and then describe it. You might get two similar descriptions, and be unable to tell which is which. But behind these similar statements lies a crucial difference. The man can see and sense an imaginary rose in his mind. The computer can put on a good performance, can describe an imaginary rose in detail, but can’t actually see or sense anything. It has no internal mental world; no consciousness; only a blank.
But some thinkers reject the wrong-stuff argument and believe that, once computers and software grow powerful and sophisticated enough, they will be conscious as well as intelligent.
They point to a similarity between neurons, the brain’s basic component, and transistors, the basic component of computers. Both neurons and transistors transform incoming electrical signals into outgoing signals. Now, a single neuron by itself is not conscious, not intelligent. But gather lots together in just the right way and you get the brain of a conscious and intelligent human. A single transistor seems likewise unpromising. But gather lots together, hook them up right, and you will get consciousness, just as you do with neurons.
But this argument makes no sense. Granted, one type of unconscious thing (neurons) can create consciousness in the right kind of ensemble; why should the same hold for other unconscious things? In every other known case, it does not hold. No ensemble of soda cans or grapefruit rinds is likely to yield consciousness. Yes, but transistors, according to this argument, resemble neurons in just the right way, and therefore will act like neurons in creating consciousness. But this “exactly right resemblance” is just an assertion, to be taken on trust. Neurons resemble heart cells more closely than they do transistors, but hearts are not conscious.
In fact, an ensemble of transistors is not even the case we’re discussing; we’re discussing digital computers and software. “Computationalist” philosophers and psychologists and some artificial intelligence researchers believe that digital computers will one day be conscious and intelligent. They go further and assert that mental processes are in essence computational; they build a philosophical worldview on the idea that mind relates to brain as software relates to computer.
So let’s turn to the digital computer. It is an ensemble of (1) the processor, which executes (2) the software, which, as it executes, changes the data stored in (3) the memory. The memory stores data in numerical form, as binary integers or “bits.” Software can be understood many ways, but in basic terms it is a series of commands to be executed by the processor, each carrying out a simple arithmetic (or related) operation, each intended to accomplish one part of a (potentially complex) transformation of the data in the memory.
In other words: by executing software, the processor gradually transforms the memory from an input state to an output or result state, much as old-fashioned film was transformed (developed) from its input state (the exposed film, seemingly blank) to a result state bearing the image caught by the lens. A digital computer is a memory-transforming machine, where the process of transformation is dictated by the software. We can picture a digital computer as a gigantic blackboard (the memory) ruled into squares, each large enough to hold the symbol 0 or 1, and a robot (the processor) moving blazingly fast over the blackboard, erasing old bits and writing new ones. Such a machine is in essence the “Turing machine” of 1936, which played a fundamental role in the development of theoretical computer science.
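To make the blackboard picture concrete, here is a minimal sketch of such a machine in Python. The function name, the rule format and the sample rule table are my own, invented purely for illustration; the point is only that everything the machine does reduces to reading a square, consulting a rule, rewriting a bit, and stepping left or right.

```python
# A toy version of the "blackboard and robot" picture: the tape is the
# blackboard, the loop below is the robot, and the rule table is the
# software. This particular (invented) machine simply inverts every bit
# on the tape and then stops.

def run_turing_machine(tape, rules, state="start"):
    """Repeatedly read a square, consult a rule, rewrite it, and move."""
    pos = 0
    while 0 <= pos < len(tape):      # a real machine's tape is unbounded
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write            # erase the old bit, write a new one
        pos += move                  # step one square left (-1) or right (+1)
        if state == "halt":
            break
    return tape

# Rule table: in state "start," flip the current bit and move right.
rules = {
    ("start", 0): (1, +1, "start"),
    ("start", 1): (0, +1, "start"),
}

print(run_turing_machine([1, 0, 1, 1, 0], rules))  # -> [0, 1, 0, 0, 1]
```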
So: everyone agrees that today’s computers are not conscious, but some believe that ever-faster and more capable computers with ever-more-complex, sophisticated software will eventually be conscious.
This idea also makes no sense. Today’s robot zipping around the blackboard changing numbers is not conscious; why should the same machine speeded up, with a different program and a larger blackboard, be conscious? (And why shouldn’t other robots executing elaborate programs to paint cars or slice chickens have the same sort of consciousness?)
Digital computers will never be conscious. But what about intelligence? Could we build a zombie using a digital computer: an entity or robot that is unconscious but nonetheless able to think, talk and act like a human?
*****
The tricky part here is the nature of thought and the cognitive spectrum. Sometimes (when you are wide-awake, mentally alert) you think analytically. But as alertness falls, your thought becomes less focused and abstract, your tendency to drift or free-associate increases, and the character of thought and memory changes.
Every day we pass from the sharply focused reds and oranges of analytical thought through the lazier, less exhausting yellows and greens of habit and common sense and experience into the vivid blue of uncontrolled thinking and free association, and finally into the deep violet of sleep and dreams.
Partway down the spectrum, as you pause and look out a window, your thoughts wander. They move “horizontally” instead of straight ahead in a logical, analytic way. But as you lose the ability to solve problems using logic and abstraction, you gain the capacity to solve them by remembering and applying earlier experiences. As your focus drifts still lower and you approach sleep, you lose more and more control of your thoughts and withdraw further from external reality. At the bottom of the spectrum, on the brink of sleep, you are free-associating.
It follows that your level of “focus” or “alertness” is basic to human thought. We can imagine focus as a physiological value, like heart rate or temperature. Each person’s focus moves during the day between maximum and minimum. Your focus is maximum when you are wide-awake. It sinks lower as you become tired, and reaches a minimum when you are asleep. (In fact it oscillates several times over a day.)
We can’t hope to produce artificial thought on a computer unless we reproduce the cognitive spectrum. It’s an immensely hard technical problem that goes way beyond the brief sketch I’ve given here. But many years down the road, we will solve it.
*****
So we arrive back at that strange independence of consciousness on the one hand and intelligence on the other. Either can exist by itself.
If we put the two together, the result is obviously more powerful than mere consciousness without intelligence. But is it more powerful than intelligence without consciousness? Are human beings more capable than zombies?
Yes, in the sense that zombies can’t imagine us (can’t grasp what consciousness is), but we can imagine them. (Zombies can’t imagine consciousness because they can’t imagine anything.)
But what practical, biological use is consciousness if a zombie and a person can, in principle, lead indistinguishable lives?
Does consciousness give the possessor some added survival or reproductive advantage? Philosophers and scientists have proposed answers; but the question remains open. If the answer is no, we’re faced with a different question: why did evolution “develop” a complex mechanism that is biologically pointless?
Obviously consciousness serves a spiritual purpose. No zombie could suffer and sacrifice for a friend on principle. The zombie could talk a good game and, if we programmed it right, would be thoroughly self-sacrificing. But its good deeds resemble small change handed out to the poor by a billionaire, whose actions seem like charity although they require no sacrifice of him at all.
Do the spiritual possibilities (and the many pleasures and satisfactions) opened by consciousness make up for the reality of suffering and pain? This question resembles one asked in the Talmud: would it have been better for human beings had they never been created? The rabbis’ answer is, ultimately, yes: it would have been better.
But this question too remains open.
Discussion Summary
I addressed the question of building a “human” computer—using software, in other words, to build a mind, hence a mindful computer—hence a human computer. Mind has two basic aspects: thinking and feeling or (equivalent to feeling) awareness, qualitative experience, consciousness. Of course these two aspects of mind color each other deeply—like two lighthouses with their beams fixed on each other.

I believe that one day we will build a thinking computer. I don’t believe we will ever build a conscious computer. We will wind up with an immensely powerful, useful and dangerous machine, but not one that is human: although it will think, it will be unconscious. It can claim to be conscious: ask if it’s conscious and it can say (indignantly) “Of course! If you doubt that I’m conscious, why don’t you doubt that you are? Hypocrite.” (And it walks away, sulking.) All the same, within this computer’s “mind” there is no one home; the machine is what we’ve come to call a zombie: an unconscious thinker. And there’s one other important distinction, in consequence, between humans and zombies: we can imagine what it’s like to be a zombie, but a zombie can’t imagine what it’s like to be human. In fact, it can’t imagine anything at all.
Readers’ comments covered a fairly wide range of questions and objections (and after all, my views on the topic—anyone’s views—are highly arguable and controversial); but two important themes emerged.
The first is the more important: it’s hard for many people to accept that there’s anything computers can’t do. Many people’s confidence in the power of digital computers seems unbounded. One commentator wrote that “there is no reason consciousness cannot be simulated on a powerful enough computer”; others had similar thoughts.
In fact, some people’s confidence extends to the idea of treating a human as if he were a digital computer. Some people believe that by capturing a mind in software, the mind or even the mind’s owner could (in effect) be uploaded to the Internet. One wrote, “If a human being could successfully be uploaded into a machine…”; there were similar comments.
It’s natural for people to be optimistic about the power of computing. It’s also natural to equate the mind with software. The mind has a puzzling relationship to the brain: it’s created by the brain, but it’s intangible and invisible. If you open someone’s head, all you see is brain cells wired together. Likewise, the virtual machine created by software (such as the browser you’re using now) runs on an electronic computer, but if you crack open the processor and memory chips and look inside, you see only microelectronics; you can’t see software. The idea that mind relates to brain as software relates to computer is natural, even self-evident.
But it’s also wrong. These comments all take an oddly cramped view of human beings. Imagine that we are somehow able to capture some particular mind in software, or (equivalently) in data that software ingests, whereupon it becomes a mind. Say you’re the lucky test subject: your mind has been captured as digital data, a long list of binary numbers. Can you actually believe that these numbers plus a computer are your identical twin? Is your personhood, your way of experiencing the world, your very self as meager as that? Consider a photo of a person: it might be high-definition, even 3D, but you’d never confuse the photo with a human being. You’d never describe the photo as your twin, as another person made of different stuff. The list of binary numbers is a different sort of photo; that’s all. As for the idea some people have that someday they will upload themselves to the internet and thereby live forever: remember that you could crumple the list of numbers, set it on fire, toss it in the trash, and it would make no difference to you; you don’t die when that happens. And when you do actually face death, the existence of those numbers on paper or on the internet won’t matter to you either.
The other main theme in the comments: granted that digital computers will never be conscious, never be human, what about some other form of computer, some other machine?
In the 1930s, Alan Turing and other logicians (Post, Kleene) set out to give a precise definition of “compute.” Their work led to the strong result that the computer you’re using right now—assuming you give it as much memory and time as it needs—can do every computation that exists, with nothing left over. A different sort of machine might compute faster or use different kinds of physical processes and materials, but there’s no fundamental way in which it differs from the computer in front of you now. Even a quantum computer adds nothing fundamental: although it may be immensely faster on certain problems, in the end it is able to do exactly the same computations as the laptop in front of you.
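To make that equivalence concrete, here is a small sketch, again my own invention for illustration, of a “counter machine”: a machine model built on nothing but integer counters, increments, decrements and jumps, about as far from a modern laptop’s design as one can get. That a few lines of Python can simulate it exactly is a small instance of the general result: whatever any such machine computes, your computer computes too.

```python
# An interpreter for a toy "counter machine," a design very unlike a
# laptop: its memory is a few integer counters, and its only operations
# are increment, decrement, and (conditional) jumps. Your computer can
# simulate it exactly, so anything it can compute, your computer can.

def run_counter_machine(program, counters):
    """Interpret a list of (op, args...) instructions; return the counters."""
    pc = 0  # program counter: index of the next instruction
    while pc < len(program):
        op, *args = program[pc]
        if op == "inc":       # inc r: add 1 to counter r
            counters[args[0]] += 1
            pc += 1
        elif op == "dec":     # dec r: subtract 1 from counter r
            counters[args[0]] -= 1
            pc += 1
        elif op == "jz":      # jz r, target: jump if counter r is zero
            pc = args[1] if counters[args[0]] == 0 else pc + 1
        elif op == "jmp":     # jmp target: unconditional jump
            pc = args[0]
    return counters

# A counter-machine program that adds counter 1 into counter 0:
add = [
    ("jz", 1, 4),   # 0: if counter 1 is zero, halt (jump past the end)
    ("dec", 1),     # 1: move one unit...
    ("inc", 0),     # 2: ...from counter 1 to counter 0
    ("jmp", 0),     # 3: repeat
]
print(run_counter_machine(add, [3, 4]))  # -> [7, 0], i.e. 3 + 4 = 7
```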
I’ll finish with two questions based on these comments.
New Big Questions:
1. Why are we so willing to believe in the omnipotence of computers?
2. Why did no one raise the Judeo-Christian objection to “human” machines, that human beings are “the image of God,” and how could God’s image (any more than God Himself) possibly be reduced to a list of binary numbers? Are there so few Jews and Christians left? Or have Jews and Christians been intimidated by scientism into silence?
And why aren’t these questions crucial to Templeton?