Do Brains Make Minds? (Does the Brain Equal the Mind?)
[Creative Organization]
(2003/08/25)
Do Brains Make Minds?
WHAT are you thinking about right now? Perhaps you're deciding whether to continue reading this book or to pay a bunch of bills gathering on your desk. Where are you thinking that thought? In your brain? In your mind? This is the crux of the mind-body problem: What is the relationship between the thoughts in our minds and the brains in our heads? This is one of the fundamental issues in philosophy and it has enticed philosophers for centuries. Is gray matter all that matters, or is "mind stuff" different in kind from "brain stuff"? Is there something unique and nonmaterial about the human mind, something not crammed into our craniums? Today, the relationship between brain and mind is the subject of intense scientific debate. What is special about the human brain compared with, say, the brains of chimps or dolphins? Or compared with the artificial brains of computers? Modern brain research--that is, neuroscience--provides a deep understanding of our processes of cognitive thought, sensory perception, emotional feelings, and behavioral actions. But can neuroscience explain love and hate, ambition and altruism, music and art? Can neuroscience solve the mind-body problem? We have five expert views.
PARTICIPANTS
Dr. Barry Beyerstein, a brain scientist, is a professor of neuropsychology at Simon Fraser University in Canada. Barry is a skeptic who does not believe in anything nonphysical.
Dr. David Chalmers is co-director of the Center for Consciousness Studies at the University of Arizona. Dave believes that correlations between brain states and mental events do not prove that brain causes mind.
Dr. Marilyn Schlitz is an anthropologist and parapsychologist at the Institute of Noetic Sciences. Marilyn asserts that we can have experiences outside the brain.
Dr. John Searle, the author of many books about the mind such as Minds, Brains and Science, is the Mills Professor of philosophy at the University of California, Berkeley. John focuses on the problem of how the brain causes experiences.
Dr. Fred Alan Wolf, a theoretical physicist, is the author of The Dreaming Universe: A Mind-Expanding Journey into the Realm Where Psyche and Physics Meet. Fred speculates that reality is more spiritual dream than physical manifestation.
ROBERT: Barry, you're a materialist who believes that only the physical is real. Does that mean you believe that the mind is the output of the brain, just as urine is the output of the kidneys?
BARRY: The brain and the kidneys are both physical organs. Both have anatomical structures and physiological processes that generate particular things. And, yes, the output of one is urine and the output of the other is thought.
ROBERT: John, you're one of the leading philosophers of mind. Your book The Rediscovery of the Mind helped to return the mind to the front burner of intellectual inquiry. How do you assess the increasing confidence--some might call it arrogance--of neuroscientists like Barry, who are virtually asserting that they have solved the mind-body problem?
JOHN: Well, I don't detect any arrogance--though I'm sorry that Barry's so down on kidneys. But I do think he would agree that we have a long way to go in understanding how the brain works. Most of the neuroscientists I know are very cautious about the progress we've made in understanding the brain, and in fact progress has been very slow. It's laborious to try to understand how the brain does anything. It may even be a little overoptimistic to state that we now can explain sensory perception, much less emotions. We don't really understand how perception works. We can more or less track the visual system from the back of the eye through the midbrain to the cerebral cortex at the rear surface of the brain. Maybe we can figure out what's going on in the midbrain, but when we get to the visual areas of the cerebral cortex, though we can relate certain simple perceptions to neuron function, unifying these perceptions into visual awareness gets to be mysterious.
ROBERT: Any solutions here to the mind-body problem?
JOHN: There are really two mind-body problems. One is the overall philosophical question--What are the general relationships between the mind and the brain?--and I think we can now say what those are: brain processes cause mental states and mental states are realized in the brain. But the second mind-body problem is what Dave [Chalmers] calls the hard problem--How exactly does it work? How do brain processes cause mental states?--and we don't know the answer to that.
ROBERT: Marilyn, you're director of research at the Institute of Noetic Sciences and have conducted some of the leading experiments in parapsychology. And you've produced strong, if controversial, evidence for rather startling abilities of the human mind to apprehend images in ways not explainable by neuroscience. How do you assess the claims being made by many scientists that the mind is strictly the physical output of the brain and nothing more?
MARILYN: I would take the position of a radical empiricist, in that I am driven by data not theory. And the data I see tells me that there are ways in which people's experience refutes the physicalist position that the mind is the brain and nothing more. There are solid, concrete data that suggest that our consciousness, our mind, may surpass the boundaries of the brain. So I think it's important that we keep a balanced perspective.
ROBERT: Dave, your book The Conscious Mind makes the controversial case for "mind" and "consciousness" being a primary element of reality, like mass and energy, and not an epiphenomenon, or secondary phenomenon arising meaninglessly from the brain. What would it take for you to reverse your position--change your mind, as it were--and discard the mind as a primary element of reality and realize that you should have been a good old materialist all along, like Barry [Beyerstein], believing that mind is just the output of the brain as urine is just the output of the kidneys? What kind of data would you have to see?
DAVE: Well, I started out life as a materialist, because materialism is a very attractive scientific and philosophical doctrine.
ROBERT: Materialism is the philosophical position that only the physical is real, and anything else, like mind or consciousness, is just an artifact or illusion. What you can't know through the normal senses cannot exist.
DAVE: Brain research is going to give us better and better correlations between states of the brain and events in the mind. That's what we're seeing now; it's beginning to happen. We find these kinds of strong correlations in many areas. Take the visual cortex, which is associated with certain kinds of visual experiences. Areas of brain function and different states of consciousness are indeed coming together. But finding correlation is not the same as finding an explanation, a reduction of mind to brain.
ROBERT: I take it you mean that correlation is not cause. Correlations of brain states with mental events can't reduce the mental to the physical. To claim that there is nothing in the mind not generated by something in the brain would be a philosophical leap too far.
DAVE: To truly bridge the gap between the physical nature of brain physiology and the mental essence of consciousness, we have to satisfy two different conceptual demands. It's not yet looking very likely that we're going to reduce the mind to the brain. In fact, there may be systematic reasons to think there will always be a gulf between the physical and the mental.
ROBERT: Are you saying that neuroscientists will never be able to bridge the gap between mind and brain? Is there no evidence that can be discovered or produced that would convince you that mind and consciousness are just the output of brains and brain cells? Are you saying that such proof is impossible?
DAVE: Well, I think all the evidence is going to be about correlation, not about cause. So we're going to have input/output, if you like--input to the brain, output to the mind. But the really interesting question is, "How do you get from input to output?"
ROBERT: I want to push you; this is fundamental. Are you saying that it's logically impossible for any data in brain research to make you change your mind and accept materialism?
DAVE: All the data are about correlation. The question of whether correlation, however strong, is in itself an explanation or reduction isn't a scientific question; it isn't an empirical question. It's strictly a philosophical question.
ROBERT: So, again, you're determining that it's philosophically impossible for brain to explain mind, for mind to be reduced to brain? Brain research can never make you a materialist?
DAVE: Brain research is providing more and more data about the correlations. But how you interpret the data will always be a philosophical question.
ROBERT: Barry, are you more open-minded than Dave? Could you envision any data that could make you reject materialism, the belief that only the physical is real? Could any imaginable evidence convince you that radical physicalism is not the right description of the world, but that there is something more to the human mind than what resides in the human brain?
BARRY: Actually, I agree with Dave [Chalmers]. At its heart, the mind-body problem is a philosophical question. Yet I go back to what Gertrude Stein said: "Difference isn't a difference unless it makes a difference." I can't see anything that we need to bring in from the outside to explain anything in neuroscience. I'm going to push the materialist position as far as it will go. It's conceivable that someday I could come up against something that doesn't fit the neuroscience model of mind, and if that happens then I'll have to change my mind.
ROBERT: Fred, as a physicist, you've written books on what you call "the spiritual universe" and "the dreaming universe." What convictions do you have that don't fit Barry's worldview of the strictly neuroscience model of mind?
FRED: Almost nothing fits. In many ways, I agree with Dave [Chalmers] that there's really no way that materialism is going to explain consciousness. Sure, they're correlated--it's a necessary correlation, just like an automobile can go from one place to another because there's a driver inside. But I see reality differently. Reality to me is more like a dream--I see a dreaming reality. I envision a dreamer, or a great spirit, of which we're all a part. Reality as a dreamer dreaming a dream. And I think that using this model we can achieve some real scientific breakthroughs, rather than attempting to reduce everything down to the simplest level.
ROBERT: John, does this sound like a ghost in the machine to you?
JOHN: I think this whole debate so far is totally misconceived, and I can't resist saying a little bit why. Of course we're going to find correlations, just as we did with the germ theory of disease. But then the next step--again, just as with the germ theory of disease--is reduction, to find out cause. [Ignaz] Semmelweis in Vienna, with his obstetrics patients, first found a correlation; then he found causation. First you find a correlation, then you find a causal relation and a causal mechanism. Now, this is precisely how we're going to do it in brain research. Once we move from correlation to cause in neuroscience, then all these old-fashioned categories, like materialism [only the physical is real] and dualism [some nonphysical entity is needed to explain mind], will fall by the wayside.
ROBERT: Will states of the mind ultimately be reducible to states of the brain?
JOHN: No, but for a kind of trivial reason. Consciousness is not going to be reducible to brain states because it has a first-person ontology, by which I mean that consciousness exists only from the point of view of some agent or organism that experiences it. In this sense, states of the mind are subjective, while states of the brain are objective. So we can't get a reduction of mind to brain in the classical philosophical sense, but we can still get a solid, satisfying scientific explanation. That's all I think any of you guys really want.
ROBERT: Marilyn, do you agree with me that John is a closet materialist?
MARILYN: I'm not sure he's in the closet at all; I think he's out in the open. For me, there's a compelling body of data [from parapsychology and extrasensory perception research] suggesting that we can supersede our brain--that we can move our awareness, our sense of self, out into the world beyond our bodies, in ways that are not reducible to states of the brain. If we're ever going to have a complete science of the mind and brain, this extraordinary data will have to be accommodated by the neuroscience perspective. I don't know what we'll end up with.
ROBERT: I hear Marilyn saying something remarkably strong. If neuroscience ever hopes to form a true picture of reality, of how mind and brain constitute consciousness, it has to include data from parapsychology and allied fields. Science as it is currently constituted will never get there.
MARILYN: A complete science has to speak to all the data, including the internal sense of everyday experience, rather than assuming that we can fit everything into a purely physical scheme that simply reduces mind to brain. A reductionist model just doesn't include all the data.
JOHN: Well, I hear Marilyn saying something even stronger than that--namely, that you can have experiences outside the brain.
MARILYN: Yes.
JOHN: I don't see any evidence of that. We have our hands full trying to figure out what goes on in the brain. If I had a theory of how the brain causes experience, I would feel that that was a pretty good day's work. Then, if somebody wants to go and figure out how there can be experiences outside the brain, OK, but that's for tomorrow.
ROBERT: Why should we wait? If you feel there's even a remote possibility that we can have experiences outside the brain, and perhaps collect quantifiable, scientifically determined evidence to verify this claim, then your whole approach to the mind-body problem must suddenly shift. You have a radically new subject. Aren't you postponing what could be revolutionary?
JOHN: If you had some really conclusive data, sure. But there's nothing in the neuroscience literature offering conclusive data for out-of-body conscious experiences. You don't want to exclude the possibility a priori, but if I'm a neuroscientist with a job to do, I'm going to spend my time figuring out how the brain does it. And if somebody can then give me solid data demonstrating that there's stuff going on outside the brain, that's terrific. That would be like finding diseases that aren't explained by the germ theory.
ROBERT: I agree that scientists are more likely to do good science by remaining in their own disciplines. If you're a pathologist, you're best off staying within pathology. Neuroscientists are no different. But science sometimes requires a few fearless souls--no metaphysics intended--to step outside the common order and risk failure, even ridicule. What Marilyn is saying, and Fred as well, is that there's a whole world of consciousness outside neuroscience, and that unless you consider this data, you're not going to truly understand the nature of the mind or the construction of reality.
JOHN: Let the people who are absolutely convinced that they have solid data for out-of-body conscious experiences do the research on what they think are worldview-changing occurrences. But those of us who have a well-defined research project--namely, how the hell does the brain do it?--should concentrate on this vital, scientifically clear work. We know that the brain does it; let's figure out how. After that, if you believe that you can corroborate mental stuff going on outside the brain, then fine. As for me, I'm very skeptical about it. I've never seen anything that's even remotely supportive--but let's keep an open mind about it.
ROBERT: Do I sense a slight pejorative tinge in your phrase "out of body"?
JOHN: That's what Marilyn is talking about, right?
MARILYN: If we want to accommodate the full nature of human experience, and to fully understand who we are as unique human beings, then we have to move out of this box that limits our inquiries--we have to move beyond the easy questions. We have to expand our search to include the personal, introspective observations that people make in every culture, every day of their lives.
FRED: I just want to say that there's an assumption here that John is making--and all of us are, to some extent--that the subjective "I" is within the body. This is not a clear evidential statement. It seems obvious, but it's absolutely not provable. You cannot scientifically prove that your "I" is in your body. There's no scientific evidence for that.
BARRY: Look, if I manipulate your brain [give you coffee, alcohol, drugs], your consciousness is going to change.
FRED: That may be, but you don't know that.
JOHN: The point I'm making is that the "I" that I live with is in my body.
FRED: You don't know that.
JOHN: Well, "I" see.
FRED: You believe that's true.
JOHN: I wake up in the morning and there isn't any question whose body this "I" is in. If I can figure out how that mechanism works, that would be terrific. Whoever can explain that [how one's mental sense of self is formed from the billions of neurons in one's head] should get the Nobel Prize.
FRED: What if you were to wake up in another reality? What if you suddenly realized that your "I" was in an alternate reality?
JOHN: Terrific! Have you ever had that experience?
ROBERT: Even if Fred's "I" has had such alternative-reality wakings and realizations, the easiest explanation is that Fred was just dreaming. (Fred likes to dream.) That's what dreams are--sensations of alternate realities which are artifacts constructed by states of the brain, usually while you're asleep.
FRED: That's not the point. As long as you have a paradigm you're always going to try to define things within that paradigm. What I'm saying is, your paradigm ain't big enough. We need to go beyond the egocentric, "I"-centered worldview that a subject exists only in a body.
ROBERT: Let's come back down to bodies on earth. Let's talk about comparative anatomy--specifically, how the human brain compares with the brains of various animals. If brain is the sole cause of mind, we should be able to plot some additional data points by investigating the correlations between mind and brain in other species. Barry [Beyerstein], describe briefly the relationship between the human brain and the brains of chimps or dolphins.
BARRY: The basic floor plan of the mammalian brain is remarkably similar, from the human brain down through the rest of the mammalian chain. But what differentiates human brains from the brains of other animals is probably the most interesting part--and, I think, one of the most profound pieces of evidence in favor of the idea that the brain is the organ of consciousness. If you compare brains, as comparative neuroanatomists and evolutionary biologists do, what you find is that as the brain develops--gets larger, more complicated, and more interconnected--new mental processes emerge that didn't exist prior to that. Take the dolphin brain and the human brain, for instance. Huge parts of our brains--that is, the higher sections of the cerebral cortex--are devoted to vision, which is to be expected, since vision is our primary sense. In dolphins, brain areas devoted to vision are relatively smaller--which is again to be expected, since vision is not the primary sense of dolphins, whose environment is the ocean. Dolphin brains have a larger area devoted to hearing, since dolphins live in a world requiring a three-dimensional auditory sense, which they use for active echolocation; and their brains are structured accordingly, with huge areas devoted to that.
ROBERT: How about the relative size of the dolphin brain compared with the human brain?
BARRY: The dolphin brain is a little larger. So size isn't everything.
ROBERT: Would you say a dolphin is conscious?
BARRY: Yes, I think so. There's much good research on higher mental processes in dolphins, such as problem-solving.
ROBERT: Let's review the facts. The dolphin brain is larger than the human brain, with more auditory than visual territory. In this context, what about the output of the dolphin brain versus the output of the human brain, say, in terms of social accomplishment or mental activity? Doesn't there seem to be a mismatch here, a disconnect? Either dolphins are a whole lot smarter than we think or there may be something really interesting going on in the human brain. Fred, what's your feeling about this?
FRED: This is a very difficult question.
ROBERT: Should I ask you only easy ones?
FRED: We have thumbs, which allow us to manipulate the world better than dolphins can. And we have consciousness in our thumbs.
ROBERT: Now you sound like Dave [Chalmers]--seeing consciousness in lots of strange places.
FRED: I don't believe that consciousness is limited to the brain. I think there's consciousness in the body. Whatever consciousness does, it's adapted in the dolphin to form that kind of entity. It's not that a dolphin has consciousness; it's that consciousness has a dolphin. Consciousness also has a Fred Wolf, which appears momentarily and disappears--
ROBERT: Is there something special about dolphins and Fred Wolf that spawns such attention from consciousness?
FRED: Consciousness also has a Robert Kuhn. It has a John Searle. Consciousness has it all. It seems to me that this model of reality, because it encompasses more, can help us explain something. We need to go back and look at the ways in which ancient peoples first began to think about consciousness.
ROBERT: Dave, we discussed this in a previous program, but it's worth going into here: Do you see consciousness as different in humans and animals?
DAVE: I think humans and animals have a lot in common. They all perceive, learn, remember, act on the world, in broadly similar ways. I think they're all conscious. What's different in humans is that we have language.
ROBERT: And language engenders self-consciousness?
DAVE: Language gives us a set of concepts that come along with that. Take the word "I." When we got the word "I", we got self-consciousness and also the articulated set of concepts that goes with it.
ROBERT: If language in general and the concept of "I" in particular constitute a fundamental difference between human and animal cognition, how do you account for language in humans? Especially since the brains of humans and animals are so similar and the dolphin brain is even larger than the human brain.
DAVE: I think the human brain is a lot more developed. It's learned to make many fine-grained distinctions. Somewhere along the line, something happened in the evolution of our brains that gave us the ability to speak.
BARRY: The floor plan is similar; it's the small differences that distinguish the human brain from the brains of other mammals.
ROBERT: There's no mystery to language. Neuroscientists can locate where language is generated by the brain. Stroke victims can be lucid but totally unable to speak, if the traumatic insult was to one of these specific language areas.
JOHN: There isn't any question about it. For most people, language is located on the left side of the brain. Humans have specialized language areas in the brain that don't exist in other primate brains.
DAVE: Apes and parrots use language in very simple ways, and it's interesting to see them doing that. A parrot can be trained to talk. Apes use signs, and they can communicate by pointing. But none of these rudimentary activities is like the human version of complex, articulated language.
ROBERT: Marilyn, do you see qualitative differences between humans and animals?
MARILYN: What intrigues me about your question is the notion of extended capacities. There are creatures, like bats and dolphins, who have the ability to echolocate, by means of resources we don't use in our repertoire of capabilities. In a dog, the olfactory senses are highly developed. All of this leads me to wonder what capacities of the human brain are going untapped. What capabilities might allow us to actualize certain unrecognized aspects of our experience that go far beyond the constraints that the materialist box would impose on us?
ROBERT: Are you saying that the small anatomical difference between human and animal brains is precisely related to the sharp differences between human and animal mental activity?
MARILYN: I wouldn't reduce it to that.
ROBERT: But that's what everyone else seems to be saying.
MARILYN: I simply don't know. Ultimately, at the end of the day, if it turns out that we can reduce it all to the brain, I would say, "Fine!" My point is that we really don't have enough information about what the capacities of the brain are to understand how a purely physical model of reality would accommodate such a broad range of human experiences, including parapsychological phenomena such as out-of-the-body experiences. We don't know the potential of what our experiences might become if we could really harness and utilize our brains.
ROBERT: But John, you're saying that the fact that the human brain has a language area is the one key factor that differentiates human beings from animals?
JOHN: Well, there are other differences, but if you had to say in one sentence what the difference is between humans and animals, as far as consciousness and mental life is concerned, it's language. Once you have language, you can get all kinds of experiences you can't get otherwise. Animals can have pair-bonding, but they can't, in our sense, fall in love. They can't have a love affair, because for that you need a vocabulary. They can't suffer the angst of postindustrial man under late capitalism. Now, I have that angst all the time, but I couldn't have it without language.
MARILYN: You haven't met my dog.
JOHN: The point is that the ability to structure experience linguistically gets you a kind of self-referential capacity. That is to say, you can have words that refer to the emotions of which that word is a component part. As a French philosopher said, "Very few people would ever fall in love if they had never read about it." Nowadays, you need to see it on television or in the movies. In order to fall in love, you need a vocabulary--a whole scenario--that goes with it. And this is true, cross-culturally, of all human beings.
ROBERT: So you're reducing the fundamental difference between humans and animals to language?
JOHN: It's not "reducing."
ROBERT: I'm trying to make you a reductionist.
JOHN: No, I'm not a reductionist. It's an extension. What I'm saying is....I love Ludwig--that's my dog. He's wonderful and we communicate well. But when it comes to doing philosophy, poor thing, he can't even keep up with me. To do philosophy, you have to be able to talk.
MARILYN: But we shouldn't privilege one way of knowing over another. It's obvious that there are things that are unique about human abilities as compared with those of dogs or birds or ants. But in the same way that our human differences make for a more interesting human soup, the multiple ways of knowing among species add to the repertoire of what makes life so interesting and rich. It's really about the diversity of ways of knowing. And humans are not necessarily superior on this kind of social evolutionary ladder to dogs or any other creature.
JOHN: If Ludwig could talk, the first thing he'd say is, "How come you humans can't smell at all?"
MARILYN: Exactly.
ROBERT: I think all of you are being too politically correct.
JOHN: I'm accused of that?
BARRY: Well, in terms of mental output, the difference between humans and animals far exceeds the small anatomical and physiological differences we see in their respective brains. And I think that's not been explained.
JOHN: Human and animal differences are huge, but they're made possible by those anatomical differences.
ROBERT: That's a philosophical point of view, not an experimental one.
JOHN: We have people who've suffered damage to parts of their brains, and they go back to existences that are like Ludwig's.
ROBERT: Yes, but humans who can't see or hear are just as conscious, and can be just as literate, as anyone else.
DAVE: Here's a better difference. Let's see what humans can do without the aid of culture. A lot of what makes us as smart as we are is the cultural apparatus and substructure that we've built up in our society. Culture is largely made possible through language and through communication. With culture, we're all smart. We can stand on the shoulders of our ancestors and we can see everything. Now suppose you bring me up in the wild, without culture, without much in the way of language, then you'll see how smart I am intrinsically.
JOHN: Is this a personal confession?
DAVE: None of us would be anything much without culture.
JOHN: When we say "language," we don't just mean making noises through our mouths, but also things like money, property, marriage, government. The stock market. Interest rates. Congress. Elections.
ROBERT: You mean that all of these things that are culturally constituted or socially instituted can be traced back down to language?
JOHN: Absolutely. You can't have any of them without language. The capacity for expressing yourself in spoken words and storing linguistic data in written words is a tremendous revolution. Given that, culture becomes possible.
ROBERT: In philosophical terms, then, language is necessary for the development of human thought and collective human culture--but is it sufficient?
JOHN: No, but we have constructed language in such a way that it's sufficient for us. By itself, language is not sufficient. But as we have evolved thousands of years from the earliest forms of language, we have simultaneously evolved human civilization in all its color and variety. Language alone isn't sufficient; you've got to have this development. It's a kind of bootstrapping effect that we've used to build human culture--all done with our marginally better brains.
ROBERT: So we have animals and humans on the same general spectrum of consciousness. Now consider computers--massively parallel supercomputers, projected out numerous generations. The big question of the moment is, Can computers become conscious?
JOHN: I've been in an argument with these people, and the short answer is no, because if you define a computer in the classic sense as a device that manipulates formal symbols--usually zeros and ones--then that by itself is not enough for consciousness and mental life, because such manipulations are a purely formal operation.
ROBERT: Why can't formal operations, at some level of complexity, generate consciousness?
JOHN: It's the difference between syntax as a bunch of symbols and semantics as meaning. There's a one-sentence proof of this. It's kind of a long sentence, but anyway....Imagine that you're the computer, and imagine a task that you don't know how to perform. I don't know how to speak Chinese.
ROBERT: Go to your Chinese Room.
JOHN: I imagine myself locked in a room, and I have a rule book in the form of a computer program that enables me to answer questions put to me in Chinese. So the Chinese symbols come in, and I look up in the rule book what I'm supposed to do in response to each symbol, and the rule book gives me other Chinese symbols. I look at a symbol that comes in, and I look up what symbols I'm supposed to respond with, and I give back those Chinese symbols as answers. To people outside the room, it might appear as if I understood Chinese. But I don't understand a word of Chinese, because all I have are the symbols--the syntax. Now--this is the point; it's the end of the sentence--if I don't understand Chinese, even though I'm implementing the program for understanding Chinese, then neither does any other digital computer on that basis, because that's all any computer can do. The computer is a device for manipulating formal symbols.
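Searle's rule book can be made concrete with a short sketch. The minimal Python example below is illustrative only and is not part of the transcript; its tiny symbol table and canned replies are invented stand-ins for the rule book. It returns Chinese answers to Chinese questions by pure string lookup, mimicking Searle's room: symbols go in, symbols come out, and nothing in the program understands them.

# A toy "Chinese Room": an invented rule book maps incoming symbol strings
# to outgoing symbol strings. The program manipulates the symbols without
# any notion of what they mean (syntax without semantics).

RULE_BOOK = {
    "你好嗎?": "我很好, 謝謝.",      # hypothetical question -> canned reply
    "你會說中文嗎?": "當然會.",
}

DEFAULT_REPLY = "對不起, 我不明白."  # returned for any unrecognized symbols

def room_reply(symbols: str) -> str:
    """Look up the input symbols in the rule book and return the output symbols."""
    return RULE_BOOK.get(symbols, DEFAULT_REPLY)

if __name__ == "__main__":
    for question in ("你好嗎?", "你會說中文嗎?", "今天天氣如何?"):
        # To an observer outside the room the replies look like answers in
        # Chinese, but they are produced by string matching alone.
        print(question, "->", room_reply(question))

The third question falls outside the table and gets only the default string, which is the thought experiment's point: the rule book supplies behavior, not understanding.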
ROBERT: Dave, what do you see when you look inside a computer? Do you see syntax but no semantics, symbols but no meaning?
DAVE: Look inside a brain; you see a bunch of neurons interacting. Do you see any semantics in that? Somehow, and we don't know how, all those neurons interacting give rise to a conscious, meaningful mind. I don't see a difference in principle between carbon-based neurons, which are wet, and silicon-based chips, which are dry.
JOHN: OK, I'll tell you exactly the difference. The brain is a causal mechanism. "Computation" does not name a causal mechanism. It names a formal symbolic mechanism that can be implemented in a causal mechanism.
DAVE: A computer is a causal mechanism.
JOHN: You mentioned silicon. Computation has nothing to do with silicon. Computation is an abstract formal process that we, currently, in our backward technology, have found ways to implement in silicon. I have no objection to the idea that silicon might be conscious, but silicon has nothing to do with computation. Computation names an abstract, formal symbolic process that we can implement in any medium whatever.
DAVE: I think the interesting thing about artificial intelligence is that what matters to the mind is not the meat [i.e., brain tissue and cells]. It's not what the mind is composed of that's meaningful, it's the patterns, the infrastructure, which that meat constructs. Replace the meat in my brain, neuron by neuron, with silicon chips [assuming that each chip is functionally the same as the neuron it replaces]. What will happen? You're still going to have a causal mechanism for mind, but it will be a different causal mechanism. Even though this neuron-by-neuron, chip-by-chip replacement has created a new silicon-based structure, [that structure] is going to be the same kind of structure and cause the same kind of results.
ROBERT: Might not this chip-for-neuron replacement transform a conscious being into your favorite zombie, who would appear to do everything that the formerly conscious being did--react, behave, and so on--but now without self-awareness?
DAVE: You mean, is my consciousness going to fade out along the way, winking out incredibly slowly as each neuron is replaced by each chip? Why should it?
JOHN: Let's look closely at what you're saying. If you had one causal mechanism, the brain, and you replaced it with another causal mechanism made of silicon, whether or not the silicon would be conscious is an empirical, factual question, not something we can settle a priori. I think that's fine.
ROBERT: Please elaborate on what you think is fine.
JOHN: I think it's fine to hypothesize that you can create consciousness in some medium other than meat.
ROBERT: To hypothesize is one thing. To be able to do it in the real world is something else. Do you think it would ever be possible to create consciousness in some medium other than brains?
JOHN: I don't think so. I think it's out of the question.
ROBERT: You don't think you can?
JOHN: My statement is a factual thesis, not a philosophical proof. The philosophical proof goes as follows: just having the formal symbols, abstract zeros and ones, by itself, isn't sufficient to guarantee the presence of consciousness.
DAVE: Any computer is more than zeros and ones. Any computer is not just symbols. It's about voltages, and chips interacting with one another--
JOHN: Computation is not defined in terms of voltage.
DAVE: Computers are more than computation, more than zeros and ones.
ROBERT: But what is the brain doing that's so fundamentally different? Maybe, in addition to electrical impulses zipping around, the brain also works by broad electrical field transmissions. Maybe it also works by bathing neurons in an information-influencing chemical soup. What is the essential difference between carbon-based brains and silicon-based computers that can cause the qualitatively enormous difference we're calling consciousness?
JOHN: Let me give you an actual example. We don't know much about the brain, but we do know a little bit about how certain drugs affect the brain. We know that if you put cocaine into your brain, it has a dramatic effect. It messes up the neurotransmitters [i.e., chemicals that transmit information between individual neurons].
FRED: This isn't answering the question.
JOHN: I'm precisely answering the question. And the answer is this: cocaine affects certain neurotransmitters--
FRED: So what?
JOHN: Now, I can do a computer simulation of that with my own home computer.
FRED: So what?
JOHN: What I'm trying to tell you is that if I ingest cocaine, it actually causes a change in my conscious state, whereas--
FRED: You don't know that. You take cocaine and you may change in a certain way. But there are people who take cocaine and don't have any experience at all.
DAVE: Who's to say that a cocaine-induced experience cannot be present in a computer?
JOHN: The point is that the formal simulation of the cocaine experience in a computer is not sufficient to give it a cocaine high.
ROBERT: Fred, you're convinced that at some point in the future a computer can become conscious?
FRED: I think it will have to be a different kind of computer.
ROBERT: A quantum computer, massively parallel, orders of magnitude more powerful than anything imaginable today. You name the computer; I don't care what kind of computer. Do you foresee a time when your computer will be a better friend than your current dog?
FRED: Not only do I see that, but I think I'll become a better human being as a result of having a better friend.
ROBERT: So you'll have a pet, a companion, which is a computer, and you'll relate to it better than to your current dog?
FRED: Maybe better. But maybe in a more expanded sense.
ROBERT: Let's ratchet up the argument. When pet computers become orders of magnitude more powerful still--say, a trillionfold--will they then become better companions than your wife? Maybe I should ask her that about you? Maybe it will take only a thousandfold improvement to replace you--just kidding.
FRED: Both my wife and I will be better companions to each other as a result of what's coming.
ROBERT: John, how does your dog Ludwig compare to computers?
JOHN: We talk about computers a lot, Ludwig and I. I think that there's no PC that's ever going to replace Ludwig. The reason is very simple: I know that Ludwig is conscious and I know that a computer is not. And this conclusion has nothing to do with computing power. You can expand the power all you want, hooking up as many computers as you think you need, all in parallel, and they still won't be conscious, because all they'll ever do is shuffle symbols. Computers don't have the causal powers of brains. So, no; no computer as currently defined is going to replace my dog, because computers aren't conscious.
ROBERT: You're looking in the brain for some causal mechanism not present in current computers. But Fred is saying there'll be different kinds of computers.
JOHN: If we change the definition of a computer, then what are the computers? If "computer" means anything that can compute--add two plus two and get four--then you and I are computers. Because we can do that.
ROBERT: But artificial intelligence, defined broadly as non-brain intelligence, will never replace Ludwig?
JOHN: Well, if you build me an artificial dog that has the same kind of power to cause doggy consciousness, then you can have an appointment. I see no objections.
ROBERT: Then you have no problems in creating artificial consciousness?
JOHN: But computation, as it's currently defined, is never going to do that. Now, Fred says, "Well, we can change the definition, get a different kind of computer." Fine, but we already have a different kind of computer--you and me.
ROBERT: But now we're talking about duplicating our brains in another physical form--in silicon, or gallium arsenide, or some new material.
JOHN: Or in whatever. Look, the brain is a machine. Whatever we know, we know that. And if by "machine" you mean a physical system capable of performing functions, then the brain is a machine. So we already have a conscious machine. You have one, I have one; it's called a brain.
ROBERT: Dave [Chalmers], do you agree with that?
DAVE: I think that what matters to consciousness is the structure. So if you took my neurons and replaced them by silicon chips, you're going to have a conscious machine. The PCs we have today aren't even close.
ROBERT: Expand computer power as much as you like.
DAVE: The PC has potential. If you can get a computer to take on any structure you like, and if consciousness is generated by structure, then by definition that kind of structure is going to eventually give you consciousness.
ROBERT: So eventually computers can become better companions than people?
DAVE: I don't know if we want our companions, or our pets, to be that smart.
MARILYN: We'll never be able to equate human beings with computers. We're not machines. I disagree profoundly with the notion that we're just physical, mechanical objects. Humans are unpredictable. We're capable of a vast repertoire of messy things called emotions. We have the potential for intense kinds of transcendent experiences that will never be within a computer's repertoire. So we need not fear our position as human beings--and that's not to say that someday computers won't be wonderful companions. Dogs aren't the same as humans, either. So down the road we may well have computers as pets.
ROBERT: I hear two radically different views flying around. The first, more popular among scientists, states that although no computer is like the brain today, if we are clever enough, and patient enough, after a certain period of time we can create John Searle's consciousness in some non-brain physical material. But Marilyn is saying that no matter how clever we are, no matter how patient we are, it's never going to happen; no manufactured physical matter is ever going to produce human-level consciousness.
MARILYN: There's also a cosmological dimension that's fundamental to this entire issue, and it relates to one's personal belief system. If the assumption is that human beings are purely physical entities, produced on an assembly line, then maybe we could equate human beings and computers. But I think that the uniqueness and individuality of our human personalities are what make us really vital and interesting.
ROBERT: We'll now take predictions. A hundred years from today, what is the new relationship between the mind and the brain?
BARRY: There won't be a new relationship. We'll surely know a great deal more about what goes on in the brain when any specific mental experience occurs. But a hundred years from now, there will still be groups like ourselves sitting around and fighting just as we have been fighting. And people will still hold to each of these same opposing opinions.
DAVE: We'll have a really good set of correlations between processes in the brain and thoughts in the mind--which brain systems go with which mental process. We'll have a set of abstract principles to explain the correlations. I also think that computers will have minds that aren't wholly different in kind from ours.
FRED: As we begin to meld mechanical things, so-called hard silicon reality, with physiological things, so-called soft carbon reality, the distinction between a material device like a computer and a mental device like a human being will not be as demarcated as it is now. And we'll have, I believe, clearly intelligent artificial devices.
MARILYN: As a culture, we'll become so dissatisfied with this prevalent mechanistic metaphor that has deprived us of the poetry of being unique human beings that we'll throw the whole materialistic philosophy into the trash bin. As for computers, they'll take care of day-to-day matters, giving us plenty of time to excel in those things that make us uniquely human.
JOHN: In fifty years, we will know the neurological correlates of consciousness. In a hundred years, we'll know which of those correlates are actually causal--we'll know the causal mechanisms that produce consciousness.
CONCLUDING COMMENT
IT seems a paradox. The more some explain mental activity in the purely physical terms of neuroscience, the more others contend that mental activity cannot be reduced solely to electrical impulses and flowing chemicals, while still others wonder anew whether minds have existence outside the physical. Can neuroscience ultimately explain all mental activity, reducing mind to brain? Or is demoting the mind a vacant boast, philosophically naive and hopelessly deficient? Does the mind maintain an independent existence--beyond the brain and outside the physical--as a fundamental, irreducible element of reality? Perhaps the answer goes far beyond us. Perhaps we are forever limited, simply because we are forced to use the mind to explain the mind--this is our enduring paradox. It's conflict like this that carries us closer to truth.
Editor's Comments:
Reductive materialists believe they are being rigorously "scientific" when they insist that "consciousness is merely the name we give to brain activity." They are gravely mistaken. The truth is just the opposite. The processes of the mind are not "reducible" to brain activity. Consciousness is fundamentally, qualitatively distinct from the mass of gray cells known as the human brain; as a result, consciousness is not now and never will be "reducible" to mere brain activity.
Why is this so? For the answer we must turn to metaphysics. Metaphysics is a branch of philosophy that contemplates the nature of the Universe at a level more conceptually fundamental than physics. Metaphysics tells us that the Universe is not a uniform "atomic soup." Instead, the contents of the Universe are structured hierarchically. The Universe contains discrete levels, and displays discontinuities at each of these levels. Physics is the study of one level, the atomic level. Chemistry is the study of another level, the molecular level. Biology is the study of another level, the cellular level. Psychology is the study of another level altogether, the psychic level.
What reductive materialists fail to appreciate is that the Universe's "higher" levels are not reducible to its "lower" levels. Chemistry is not reducible to physics. Biology is not reducible to chemistry. Psychology is not reducible to biology. Each level functions differently, according to its own rules, based on its hierarchical level of order. What this means is that no matter how adept one becomes at physics, mere physics will never be able to explain chemical processes. No matter how adept one becomes at chemistry, mere chemistry will never be able to explain biological processes. No matter how adept one becomes at biology, mere biology will never be able to explain psychological processes. Neuroscience, so-called, will never be able to explain the human mind. Mind can only be understood at the level of mind, not at the level of biology, not at the level of chemistry, and certainly not at the level of physics. Mind can only be understood at the level of psychology.
The folly of reductive materialism is the predictable result of self-styled "rationalists" failing to grasp this underlying hierarchical characteristic of the Universe.
-- Bevin Chu
Explanation: Do Brains Make Minds?
Illustration(s): Barry Beyerstein, David Chalmers, John Searle, Marilyn Schlitz, Fred Alan Wolf, Robert Lawrence Kuhn
Author(s): Dr. Robert Lawrence Kuhn
Affiliation: CLOSER TO TRUTH (CTT)
Source: http://www.closertotruth.com/topics/mindbrain/204/204transcript.html
Publication Date: N/A
Original Language: English
Editor: Bevin Chu, Registered Architect