Only if a "leap of faith" = knowing.
I read Ray Kurzweil's new book "How to Create a Mind: The Secret of Human Thought Revealed" (a title with a pretty hucksterish ring to it, doesn't it?), and have a few thoughts.
As I've noticed before in reading him, he has a tendency to lump together issues of very different orders of magnitude as if they were equal in complexity. Here's one line that had me shaking my head: after discussing plans for using a Watson-like system to help treat human disease, on page 108 he writes, "Future systems can have goals such as actually curing disease and alleviating poverty." This seems to treat alleviating poverty as just another algorithm, completely ignoring issues of power and politics.
As for the book as a whole, it's a bit of a mish-mash of his earlier work. He spends a large portion of it on his view of the neuroscience of the neocortex and the idea of pattern-matching modules whose general architecture allows for plasticity of brain function. He does at least acknowledge that Jeff Hawkins's book "On Intelligence" covered some of this ground already (and, I'd say, in a more readable fashion). I can't say whether this view of the brain is really backed up by all the latest research, but in general it seems plausible.
But when we get to the more interesting (to me, at least) questions of mind and consciousness, Kurzweil doesn't really pin things down. Most of my attention was drawn to Chapter 9, "Thought Experiments on the Mind". Kurzweil writes that his view is that "consciousness is an emergent property of a complex physical system" (203) - note that he says "physical system", not biological system - and follows up with his belief: "By this reckoning, a computer that is successfully emulating the complexity of a human brain would also have the same emergent consciousness as a human."
The closest thing I could find to a definition of mind comes a few pages later: "I refer to 'mind' in the title of this book rather than 'brain' because a mind is a brain that is conscious" (205). So here again we find mind equated with brain, and I find this an inadequate way of thinking about the issue. I believe that humans can be conscious, not brains, and I believe that 'mind' is about subjective experience, not about brains (while acknowledging that a healthy brain is a necessary part of the equation). Whether a computer can equal a brain is not so clear either, though clearly we are making progress in modeling brain-like functions.
As for the 'leap of faith', Kurzweil uses the phrase many times: "My own leap of faith is this: Once machines do succeed in being convincing when they speak of their own qualia and conscious experiences, they will indeed constitute conscious persons" (210). I grant that there may well be machines in the future that are very convincing, but it really is a leap of faith to believe that they will have subjective experiences in any way like humans'. I believe there is still plenty to be learned about human consciousness and its relationship to the brain, regardless of how machines evolve.
Then there is a section on free will which seems to me to build on the same mind-consciousness-brain confusion. He discusses research on split-brain patients, in whom the two hemispheres no longer communicate directly and each hemisphere appears capable of operating essentially independently. From this he concludes: "This implies that each of the two hemispheres in a split-brain patient has its own consciousness" (227). I don't believe that follows. The question is whether the split-brain person experiences two separate consciousnesses, and I don't believe such patients give that self-report - and in any case I don't see the brain (or part of a brain) itself as conscious. On page 233 he states, "We consider human brains to be conscious" - but NO, I don't: it is humans themselves who are conscious.
Still in Chapter 9, Kurzweil offers a thought experiment on identity. First he describes a scanned 'copy' of a person, loaded onto a non-biological platform, which seems to behave just like the original. He claims, without much justification to my mind, that this copy is conscious, but he also initially concludes that it is separate from the original, however similar. This latter conclusion I agree with: a copy may be very like the original, but as soon as it is made it is on a separate path with separate experiences (a separate mind, if it has one), and thus cannot be equivalent (just as identical twins are not equivalent).
Then he describes a piecemeal 'replacement' procedure, replacing a person's brain components bit by bit with non-biological components until, at some point, the brain has been completely replaced. In this case it seems as if identity is retained, and I believe that makes sense. But he argues there is a contradiction here: the fully replaced person ends up equivalent to the copy in the first scenario, and therefore the copy is (has the same identity as) the original.
I think there is a crucial difference between the two scenarios: in the second, identity is retained because there is an ongoing continuity of experience - indeed, I think that continuity is what is crucial about identity. Kurzweil never really defines identity, just as he never really defines consciousness, and I think this leads to all sorts of muddy thinking. I'm not saying these terms are easy to define, but one must make some stab at it, or else the words stand for very fuzzy concepts.
In the final portion of the book Kurzweil revisits his Law of Accelerating Returns (LOAR) from "The Singularity Is Near", addressing some objections raised by Paul Allen among others. Not much new here.
To sum up, I think Kurzweil is right about the trends toward increasing hardware and software power (in smaller packages), and this will indeed lead to some impressive breakthroughs - perhaps including robots that seem very human. Whether they will have minds or consciousness is certainly not resolved in this book, and we may just have to wait and see. A better title might have been "How to Create a Non-biological Brain", and leave it at that.