21 December 2012

Does Kurzweil know How to Create a Mind?

Only if a "leap of faith" = knowing.

I read Ray Kurzweil's new book "How to Create a Mind: The Secret of Human Thought Revealed" (a title with a pretty hucksterish ring to it, don't you think?), and have a few thoughts.

As I've noticed before in reading him, he has a tendency to lump issues of different orders of magnitude together as if they are equal in complexity.  Here's one line that had me shaking my head - after discussing plans for using a Watson-like system to help treat human disease, on page 108 he writes "Future systems can have goals such as actually curing disease and alleviating poverty."  This seems to suggest that alleviating poverty is just another algorithm to be written, ignoring completely issues of power and politics.

But as for the entire book, it's a bit of a mish-mash of his earlier stuff.  He spends quite a big portion of the book discussing his notions about the neuroscience of the neocortex, and the idea of pattern-matching modules whose general architecture helps allow for the plasticity of brain function.  He does at least acknowledge that the Jeff Hawkins book "On Intelligence" covered some of this ground already (and I'd say in a more readable fashion).  I can't say whether this view of the brain is really backed up by all the latest research, but in general it seems plausible.

But when we get to the more interesting (to me at least) questions of mind and consciousness, Kurzweil doesn't really pin things down in any way.  Most of my attention was drawn to Chapter 9, "Thought Experiments on the Mind".  Kurzweil writes that his view is that "consciousness is an emergent property of a complex physical system" (203) - note that he uses "physical system", not a biological system, and follows up with his belief:  "By this reckoning, a computer that is successfully emulating the complexity of a human brain would also have the same emergent consciousness as a human."

The closest thing I could find to a definition of mind comes a few pages later: "I refer to 'mind' in the title of this book rather than 'brain' because a mind is a brain that is conscious." (205).  So here again we find mind equated with brain, and I find this an inadequate way of thinking about the issue.  I believe that humans can be conscious, not brains, and I believe that 'mind' is about subjective experience, not about brains (while acknowledging that a healthy brain is a necessary part of the equation).  Whether a computer can equal a brain is not so clear either, though clearly we are making progress in modeling brain-type functions.

As for the 'leap of faith', Kurzweil uses the phrase many times.  "My own leap of faith is this: Once machines do succeed in being convincing when they speak of their own qualia and conscious experiences, they will indeed constitute conscious persons." (210)  I admit that there may well be machines in the future that are very convincing, but it really is a leap of faith to believe that they will have subjective experiences in some way like humans.  I believe there is still plenty to be learned about human consciousness, and its relationship to the brain, regardless of the evolution of machines.

Then he has a section on free will which seems to me to build upon some of the confusion of mind-consciousness-brain.  He discusses research on split-brain patients, whose two hemispheres are not directly communicating, and where it seems clear that each hemisphere is capable of operating essentially independently.  From this he concludes: "This implies that each of the two hemispheres in a split-brain patient has its own consciousness." (227).  I don't believe that follows.  The question is whether the split-brain person experiences two separate consciousnesses, and I don't believe they give such a self-report - and in any case I don't see the brain (or part of a brain) itself as conscious.  On page 233 he states "We consider human brains to be conscious" - but NO, I don't - it is humans themselves who are conscious.

Still in Chapter 9, Kurzweil provides a thought experiment on identity.  First he describes a scanned 'copy' of a person, which is loaded onto a non-biological platform, and seems to behave just as the original.  He claims, without much justification to my mind, that this copy is conscious, but also initially concludes that it is separate from the original, even if very similar.  This latter conclusion I agree with.  I believe a copy may be very like the original, but as soon as the copy is made it is on a separate path with separate experiences (separate mind, if it has one), and thus cannot be equivalent (just as identical twins are not equivalent).

Then he describes a piecemeal 'replacement' program on a person, bit-by-bit replacing brain components with non-biological components, until at some point the brain is completely replaced.  In this case it seems as if identity is retained, and I believe that makes sense.  But he says there is a contradiction here which indicates that this replaced person is fully equivalent to the copy above, and thus that the copy is (has the same identity as) the original.

I think there is a crucial difference between the two scenarios: in the second, identity is retained because there is an ongoing continuity of experience - indeed I think that is what is crucial about identity.  Kurzweil never really defines identity, just as he never really defines consciousness, and I think this leads to all sorts of muddy thinking.  I'm not saying it's easy to define these terms, but one must make some stab at it or else the words stand for very fuzzy concepts.

In the final portions of the book Kurzweil goes over his Law of Accelerating Returns (LOAR) from The Singularity Is Near, dealing with some objections raised by Paul Allen among others.  Not much new here.

To sum it up, I think Kurzweil is right about the trends in terms of increasing hardware and software power (in smaller packages), and this will indeed lead to some impressive breakthroughs.  Perhaps this will include the creation of robots that seem very human.  Whether or not they have minds or consciousness is certainly not resolved in this book, and we may just have to wait and see.  I think his book would be better titled "How to Create a Non-biological Brain" and leave it there.

17 December 2012

Mind moves matter!

Or at least gets neurons to fire - paralyzed woman can control robotic arm.  The Guardian story "Mind over matter helps paralyzed woman control robotic arm" by Ian Sample on 16 December 2012 reports on developments in a Pittsburgh case.

The 52-year-old patient, called Jan, lost the use of her limbs more than 10 years ago to a degenerative disease that damaged her spinal cord. The disruption to her nervous system was the equivalent to having a broken neck.


Writing in the Lancet, researchers said Jan was able to move the robotic arm back, forward, right, left, and up and down only two days into her training. Within weeks she could reach out, and change the position of the hand to pick up objects on a table, including cones, blocks and small balls, and put them down at another location.
And here's the part I especially like:
To wire the woman up to the arm, doctors performed a four-hour operation to implant two tiny grids of electrodes, measuring 4mm on each side, into Jan's brain. Each grid has 96 little electrodes that stick out 1.5mm. The electrodes were pushed just beneath the surface of the brain, near neurons that control hand and arm movement in the motor cortex.

Once the surgeons had implanted the electrodes, they replaced the part of the skull they had removed to expose the brain. Wires from the electrodes ran to connectors on the patient's head, which doctors could then use to plug the patient into the computer system and robotic arm.

Before Jan could use the arm, doctors had to record her brain activity imagining various arm movements. To do this, they asked her to watch the robotic arm as it performed various moves, and got her to imagine moving her own arm in the same way.

While she was thinking, the computer recorded the electrical activity from individual neurons in her brain.
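The calibration loop described here - record neural activity while the patient imagines movements, then fit a mapping from firing rates to intended motion - can be sketched in miniature.  Below is a toy least-squares linear decoder run on synthetic data; it's purely my own illustration of the general idea, not the actual algorithm, parameters, or data from the Pittsburgh study:

```python
import numpy as np

# Toy sketch of BCI decoder calibration, NOT the actual Pittsburgh system:
# fit a least-squares linear map from neural firing rates to intended arm
# velocities, using made-up synthetic data.
rng = np.random.default_rng(0)

n_neurons, n_samples = 96, 500              # one electrode grid's worth of units
tuning = rng.normal(size=(n_neurons, 3))    # hidden tuning to (x, y, z) velocity

# "Imagined movement" phase: intended velocities produce noisy firing rates
intended = rng.normal(size=(n_samples, 3))
rates = intended @ tuning.T + 0.1 * rng.normal(size=(n_samples, n_neurons))

# Calibration: solve rates @ W ~= intended in the least-squares sense
W, *_ = np.linalg.lstsq(rates, intended, rcond=None)

# Decoding: turn a fresh burst of activity into a movement command
new_intended = rng.normal(size=(1, 3))
new_rates = new_intended @ tuning.T
decoded = new_rates @ W
```

With enough calibration samples relative to the noise, the decoded command lands close to the intended velocity - which is roughly why a couple of days of training could already produce usable arm movements.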
So "thinking" and "imagining" seem to have direct physical impacts in the brain.  What more can we learn to achieve by thinking?

05 December 2012

Neuroscience Fiction

Gary Marcus in the New Yorker on brain complexity, "Neuroscience Fiction", points out that our current tools like fMRI may not really show all the important things going on under the hood:
But a lot of those reports are based on a false premise: that neural tissue that lights up most in the brain is the only tissue involved in some cognitive function. The brain, though, rarely works that way. Most of the interesting things that the brain does involve many different pieces of tissue working together. Saying that emotion is in the amygdala, or that decision-making is the prefrontal cortex, is at best a shorthand, and a misleading one at that.
He also links to a recent NY Times op-ed by Alissa Quart complaining about the recent prevalence of 'brain porn':
A team of British scientists recently analyzed nearly 3,000 neuroscientific articles published in the British press between 2000 and 2010 and found that the media regularly distorts and embellishes the findings of scientific studies. Writing in the journal Neuron, the researchers concluded that “logically irrelevant neuroscience information imbues an argument with authoritative, scientific credibility.” Another way of saying this is that bogus science gives vague, undisciplined thinking the look of seriousness and truth.
Perhaps this is all just an inevitable backlash against the early peak of hype.  There's a long way to go with neuroscience and actual understanding.

04 December 2012

Human Connectome Project

Along similar lines to yesterday's post, the Human Connectome Project looks interesting, attempting to create models of neural pathways.  Here's an article, "The Symphony Inside Your Head" by Dr. Francis Collins that discusses the project; an excerpt:
For some time, neuroscientists have been able to infer loosely the main functions of certain brain regions by studying patients with head injuries, brain tumors, and neurological diseases—or by measuring levels of oxygen or glucose consumption in healthy people’s brains during particular activities. But all along it’s been rather clear that these inferences were overly simplistic.  Now, new advances in computer science, math, and imaging and data visualization are empowering us to study the human brain as an entire organ, and at a level of detail not previously imagined possible in a living person.
Worth a look!

03 December 2012

Brain Simulation

The Nature article "Simulated brain scores top test marks" by Ed Yong (Nov. 29, 2012) describes a project to simulate the neurons of a brain via computer.  It's called Spaun!
Spaun sees a series of digits: 1 2 3; 5 6 7; 3 4 ?. Its neurons fire, and it calculates the next logical number in the sequence. It scrawls out a 5, in legible if messy writing.

This is an unremarkable feat for a human, but Spaun is actually a simulated brain. It contains 2.5 million virtual neurons — many fewer than the 86 billion in the average human head, but enough to recognize lists of numbers, do simple arithmetic and solve reasoning problems.
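For a sense of what a single one of those virtual neurons involves, here's a toy leaky integrate-and-fire unit - the standard simple spiking model used in large-scale simulations of this kind.  This is my own minimal sketch with arbitrary parameters, not Spaun's actual code:

```python
# A single leaky integrate-and-fire (LIF) neuron: the membrane voltage
# leaks toward the input drive, and a spike is emitted (and the voltage
# reset) whenever it crosses threshold.  Toy parameters, my own sketch.
def simulate_lif(input_current, dt=0.001, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Return spike times (in seconds) for a list of input-current samples."""
    v, spikes = 0.0, []
    for step, i_in in enumerate(input_current):
        v += dt / tau * (i_in - v)     # leaky integration (Euler step)
        if v >= v_thresh:              # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset                # reset the membrane
    return spikes

# One second of constant supra-threshold drive yields a regular spike train
spike_times = simulate_lif([1.5] * 1000)
```

A simulation like Spaun wires millions of units of roughly this kind together so that patterns of spikes, rather than explicit symbols, carry the computation.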
While the current version is quite slow, this seems like an interesting development!

15 November 2012

My Stroke of Insight - J.B. Taylor (2008)

My Stroke of Insight is the personal story of Jill Bolte Taylor, a brain scientist who suffered a left-hemisphere stroke and eventually recovered over eight years.  The book starts with some background on the brain, and then tells of her experience during and immediately after the stroke, which basically damaged areas of the brain that manage math and language.  Her experience of this situation was a kind of immersion in the right hemisphere consciousness, where wholeness and intuition and 'vibe' became foremost, and it was a long struggle (at times not even desired) to regain a more analytical/rational view of the world.  Today Taylor believes that the right-brain consciousness gives a feeling of oneness with the world that is easily torn down by the brain-chatter of the left-brain, and she attempts to consciously control that process.

A NY Times story on Taylor, "A Superhighway to Bliss" by Leslie Kaufman, goes into this topic:

Dr. Taylor makes no excuses or apologies, or even explanations. She says instead that she continues to battle her left brain for the better. She gently offers tips on how it might be done. 
“As the child of divorced parents and a mentally ill brother, I was angry,” she said. Now when she feels anger rising, she trumps it with a thought of a person or activity that brings her pleasure. No meditation necessary, she says, just the belief that the left brain can be tamed. 
Her newfound connection to other living beings means that she is no longer interested in performing experiments on live rat brains, which she did as a researcher.
She is committed to making time for passions — physical and visual — that she believes exercise her right brain, including water-skiing, guitar playing and stained-glass making. A picture of one of her intricate stained-glass pieces — of a brain — graces the cover of her book.
I found this story interesting as a way of thinking about how babies begin to develop analytical skills - writing, reading and arithmetic take plenty of brain work to master, but we rarely have any sense of what that actually feels like.  This book tells that story, of an adult working through that effort a second time.

Some have been disappointed by this book because they feel it veers off into pseudo-science, and that's fair - this is a subjective account of one person's experience.  I note that no one in the neuroscience community appears to have provided a blurb for the book.  I think it's critical though to build up more knowledge of the subjective experience of mind, and how conscious thought may be used to guide/control one's own experience, informed by some knowledge of the underlying workings of the brain.

Early take on Kurzweil's 'How to Create a Mind'

Came across 'Ray Kurzweil's Dubious Theory of Mind', posted today by Gary Marcus at the New Yorker site.  Marcus, a professor of psychology at NYU, does a pretty thorough job of tearing down Kurzweil's book, and from all I know about Kurzweil I have to agree.  As is pointed out in the posting, Kurzweil's done some amazing things in his life, but at the same time he frequently glosses over or simplifies things that deserve a lot more thought and effort.  Here's an excerpt:
At the beginning of the book, Kurzweil promises to reverse engineer the human brain in hopes of using the brain’s secrets to advance artificial intelligence, but what he’s really done is the opposite: reverse engineer his own companies’ computer systems in order to propose a theory about how the mind works.

Ultimately Kurzweil is humbled by a challenge that has beset many a great thinker extending far beyond his field—Kurzweil doesn’t know neuroscience as well as he knows artificial intelligence, and doesn’t understand psychology as well as either. (And for that matter he doesn’t know contemporary A.I. as well as the A.I. of his heyday, when he was running his companies thirty years ago.)
I'll still take a look at the book (from the library! doesn't sound like a keeper), but I'm not sure Kurzweil is adding much this time around.

I've written about Kurzweil before: see here and here.

06 November 2012

Bacteria in the brain?

Just a quick post on an intriguing tidbit in Michael Specter's Oct 22, 2012 New Yorker story, "Germs Are Us", on the role of bacteria in our health (the so-called 'microbiome').  On the second page:
The passengers in our microbiome contain at least four million genes, and they work constantly on our behalf: they manufacture vitamins and patrol our guts to prevent infections, they help to form and bolster our immune systems, and digest food. Recent research suggests that bacteria may even alter our brain chemistry, thus affecting our moods and behavior.
Will have to see what more I can find out about that!

01 November 2012

Consciousness as 'display only' UI?

This post is not a review, but was triggered as I looked through the recent book "The 7 Laws of Magical Thinking" by Matthew Hutson (2012).  On page 2, in the Introduction, Hutson reviews some types of 'magical thinking':  "Do you believe that certain events were meant to happen?  Magical thinking.  Or that you can lift your arm through the power of your conscious thoughts?  Magical thinking, even that."

As many popular books do, Hutson leans on the 1983 study by Benjamin Libet, which compared the timing of a readiness potential in the brain to the subject's report of a conscious decision to move their arm.  His finding was that the readiness potential preceded the report of a conscious decision by about a third of a second.  There have been follow-ups that report further findings along these lines.  Hutson writes:
Libet refused to interpret his own findings as conclusive evidence against free will.  He held on to the possibility of some kind of conscious veto power that could halt or redirect an act in progress - so called free won't.  But no available evidence supports such a magical intervention.
So this led me to a few thoughts on the subject.  If one accepts the notion that consciousness is produced by the brain, then surely it must be the case that any conscious 'activity' is either preceded by or accompanied by activity in the brain.  It's not as if one's brain is an independent actor - the brain and the conscious activity are all bundled into a person, and the person makes decisions.  The decision path must involve the brain.

And we know that there is plenty going on in a person's body which is not actively controlled via consciousness - like breathing and digesting and keeping the heart beating and so forth.  Likewise we don't really consciously control ourselves when making simple movements - what muscles will be involved if I move my arm?

Is it possible that consciousness is simply like a software user interface that does not allow any action or update?  Consciousness in this model would simply register activity (thoughts) in the brain, and perhaps like moving a mouse around we can shift the focus but in fact not actually alter anything via conscious decision.  And thus any notion that consciousness can 'do something' is an illusion or 'magical intervention'?  That is certainly not my personal experience, though I recognize that plenty of consciousness can be consumed by simply going around in circles, worrying about something, or churning over possibilities.  Note also that I've never seen a brain (on its own) do much of anything.  The brain is a key organ in a person!

My personal take is that this view of consciousness as 'inactive' cannot be correct, but it's hard to really get a handle on the mechanism that conscious activity can use.  I think the area of attention and learning is the most promising place to look - since to learn something we typically have to focus attention, and we go through a period where we don't fully grasp a concept or action.  In this period we have to think carefully about each step or movement, and if we are successful at learning then finally there is a breakthrough, and we 'internalize' the learning - which makes it largely unconscious.  As an example, once we know how to serve a tennis ball, we don't have to think consciously about how to move our arms and body to do it.  But while we are learning we need all sorts of mental attention to try to get it right.  If you have no mental focus, it seems unlikely that you'll ever learn much of anything.  So what is happening when we apply mental focus?  It must involve some sort of shifting of brain resources away from the mechanisms that support our own churning thoughts and toward the area of interest.

Is the idea that we have conscious control and decision-making power somehow magical?  I would say no - a person uses consciousness to direct focus and attention, and behind the scenes all sorts of things are happening in the brain, some of which we are aware in some sense, and other things we have no awareness of.

I decided the book was not worth reading in full, but my quick skim did provoke some thinking!

30 October 2012

Neuroscience and Economics

"The Marketplace in Your Brain" by Josh Fischman over at The Chronicle of Higher Education (Sept 24, 2012) has an interesting look at how economics may want/need to start using neuroscience findings to help explain and model decision-making.  On the economics side, there is some feeling that brain findings do not necessarily add anything to simply observing behavior, and I have to say I tend to agree.

One experiment explores the finding that people will often reject what are perceived to be 'stingy' sharing offers (even though it means they will get nothing rather than something).
Jonathan D. Cohen, a neuroscientist at Princeton, went looking for the seat of that impulse. He asked 19 people to play ultimatum games with stingy offers. Two areas of the brain were active when people considered what to do. One, near the front of the brain, is called the dorsolateral prefrontal cortex and is linked to deliberative thought and calculation. The other, deeper in the brain, is tied to emotions like disgust. It's called the insula. The stingier the offer, the more insula activity Cohen's team saw. When people actually rejected the offer, this activity peaked higher than did activity in the deliberative-thought area. It appears, Cohen says, that two areas are competing in some way, and that negative emotions—or the desire for justice—can trump people's rational desire to get more.
While the brain findings here are perhaps interesting, and shed light on brain function, I'm not sure they really explain the behavior at any deeper level than we could reach by observing it directly.
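The behavior in the ultimatum game experiment above can itself be stated in a few lines: a responder who rejects offers felt to be insultingly stingy, even though rejection leaves both players with nothing.  This toy model (my own illustration, not Cohen's experimental setup, with an arbitrary fairness threshold) makes the payoff structure concrete:

```python
# Toy ultimatum game: the proposer offers a share of the pot, and the
# responder rejects any offer below a fairness threshold - at a cost to
# themselves.  Threshold and payoffs are illustrative, not experimental.
def ultimatum(pot, offer, threshold=0.3):
    """Return (proposer_payoff, responder_payoff) for one round."""
    if offer / pot < threshold:   # offer feels insultingly stingy
        return (0, 0)             # rejection: nobody gets anything
    return (pot - offer, offer)   # acceptance: split as proposed

fair = ultimatum(10, 5)     # a generous split is accepted
stingy = ultimatum(10, 1)   # a stingy split is rejected
```

A purely "rational" responder would take any positive offer; the interesting empirical fact is that real people behave more like this thresholded responder, which is what the insula-activity finding tries to explain.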

The article also discusses the funding angle.
Much of the NIH money comes from its institutes for drug addiction, mental health, and aging. "Most of us, to get funding, have to sell our ideas along disease lines," says Phelps. "Drug addiction is an obvious area where understanding reward-seeking behavior is important, and our work is clearly related to that."
The NIH wants to know more about choices because it's clear that many people understand what's needed to stay healthy but choose not to do it, says Lisbeth Nielsen, chief of the branch of individual and behavioral processes at the National Institute on Aging. "We're very interested in decision-making and aging," she says. "And that's not just health decisions but choices about insurance plans or how to manage your retirement savings. Are changes in choices related to the underlying neurophysiology? Or is it the environment? You won't know unless you get input from different sciences, and that's what neuroeconomics brings to us."
Again we see that often the neuroscience is driven by perceived disease rather than study of 'normal' behavior.  But it does seem worthwhile to get to a better model of human decision-making, in particular around areas where it seems like the decision-making is generally poor.

22 October 2012

The Meaning of Mind (1996) - Thomas Szasz

The subtitle of The Meaning of Mind by Thomas Szasz is "Language, Morality and Neuroscience" - but it's important to note that Szasz was a professor of psychiatry.  This is a short book, and a dense one as well - I never read more than about 20 pages at a time.  I felt that Szasz makes some important points that I agree with, and he is also pretty damn funny at times.  His approach might also be described as arrogant.

One of his main points is to argue against the common (mis-)use of language that equates mind with brain.  At a purely linguistic level this equation seems to fall apart quickly.  As he puts it on page 92, "When a journalist wants to categorize a crime as particularly heinous, he calls it 'mindless,' not 'brainless.' The point is obvious. A brainless person cannot commit a crime, just as an eyeless person cannot see."  Further down the page,
The terms 'brain' and 'mind' belong to different conceptual categories and different modes of discourse.  The brain is a bodily organ and a part of medical discourse. The mind is a personal attribute and a part of moral discourse.  So long as we view personal conduct commonsensically, we attribute (bad) behavior to the mind or, more precisely, to the person who displays it.  However, once we view such conduct psychiatrically (and legally), we typically attribute it to the brain: We say that the bad man is mad, or that the madman is bad, because he has a brain disease.
For Szasz, the role of personal responsibility is key, and he sees both psychiatry and neuroscience as chipping away at responsibility, turning many matters into questions of the neurochemical mix.  Szasz argues that mind is in fact more of a verb than a noun - i.e. we can't point to a thing identified as the mind, but we can talk about a person 'minding'.  If we reduce all behavior to brain activity, then there's no telling how creative people will get about claiming behavior as brain disorder, and then coming up with neurochemical 'solutions.' (see also this earlier post:  Did your brain make you do it?)

I agree with the point that 'mind' and 'brain' are in two different categories, and while mind (or minding) seems dependent on an operating brain (and nervous system and sensory inputs and some body!), to explore the mind is to speak about experiences, not neurons and neurochemicals.  I'm not sure about the mind being only part of a "moral discourse" however.  The point of this blog is that there can be scientific explorations of the experiences of mind, and how these experiences seem to create a feedback loop with the brain (i.e. from simple attention/learning to meditation and its impact on brain waves, etc.).

Thus I fully agree with this point from Szasz in his epilogue (p. 140):
I dare say there is something bizarre about the materialist-reductionist's denial of persons. To be sure, brains in craniums exist; and so do persons in societies. The material substrates of a human being - a person - are organs, tissues, cells, molecules, atoms and subatomic particles. The material substrates of a human artifact - say a wedding ring - are crystals, atoms, electrons in orbits, and so forth. Scientists do not claim to be able to explain the economic or emotional value of a wedding ring by identifying its material composition; nor do they insist that a physicalistic account of its structure is superior to a cultural and personal account of its meaning. Yet, many scientists, from physicists to neurophysiologists, claim that they can explain choice and responsibility by identifying its material substrate - that "life can be explained in terms of ordinary physics and chemistry."  (Editorial note: the last bit is a quote from a Nature article).
I frequently get this same sense about neuroscientists' writing - it's trying so hard to do away with any kind of dualism that it also seems to go hard reductionist about all human behavior, and I don't think it's a necessary or appropriate way to learn about minding.

21 October 2012

Scientists read dreams

This article from Nature, "Scientists read dreams" on October 19, 2012 by Mo Costandi, reports on some findings of brain scans performed just prior to waking dreaming subjects.  Basically the researchers linked activity in various visual processing areas to the reported dream content of the dreamers.  The basic finding comes down to this:
The findings, presented at the annual meeting of the Society for Neuroscience in New Orleans, Louisiana, earlier this week, suggest that dreaming and visual perception share similar neural representations in the higher order visual areas of the brain.
I think this is interesting as a confirmation of how at least some part of dreaming operates in the brain, but as lead researcher Yukiyasu Kamitani of the ATR Computational Neuroscience Laboratories in Kyoto, Japan says, “Knowing more about the content of dreams and how it relates to brain activity may help us to understand the function of dreaming.”  Maybe!

30 August 2012

On-line, voluntary control of human temporal lobe neurons

'On-line, voluntary control of human temporal lobe neurons' is the title of a paper published in Nature back in October 2010 on a study led by Caltech's Moran Cerf, but it came to my attention via this recent BoingBoing post (about how the paper results were wildly misinterpreted).

What I found interesting about these findings is that it seems to give experimental evidence that mental concentration can in some way influence neuron behavior.  Or as the paper puts it, "At least in the MTL, thought can override the reality of the sensory input." [MTL is Medial Temporal Lobe].  The experiment involved epilepsy patients with electrodes in their brains, who then did a test in viewing pictures online.

It's not a big surprise to me that this is possible, but it's great to see some scientific verification.  And still no one really understands what exactly happens when one 'concentrates' or 'pays attention' but clearly it can result in changes to the way the brain behaves.  I continue to be most interested in just what can be achieved via methods of concentration.

But I do think this closing line is important: "Our method offers a substrate for a high-level brain–machine interface using conscious thought processes."

Here's some coverage the paper got at the time in Time: "Controlling Your World With a Single Neuron" by Jeffrey Kluger.  I'm not quite sure why the focus is always on what these findings might enable for disabled people... isn't it even more interesting what it implies for 'normal' folks?

28 August 2012

A User's Guide to Thought and Meaning - Ray Jackendoff (2012)

I picked up A User's Guide to Thought and Meaning while in Boston at the MIT Press bookstore this spring, and it has a tangential stream that was of interest to me in relation to this blog. The book attempts to examine language, meaning and thought from a cognitive perspective.

The concept I most struggle with is the "Unconscious Meaning Hypothesis" (UMH): this says that of the three structures that make up a linguistic expression (phonology/pronunciation, syntax/grammar and semantics/meaning), "the one that most resembles the experience of thought is phonology." (p. 103). In other words, there's an emphasis on the interior pronunciation of words as the key to awareness of thinking ("we can only be aware of the content of our thoughts if they're linked with pronunciation" p. 90). Here's more on the idea, in comparison with other primates:
One difference is that we have language - the ability to convert our thoughts into communicable form by linking them to pronunciation. According to the UMH, this linking bestows on us a second difference: language enables us to be conscious of our thoughts in a way that animals can't be. But it's not through awareness of the thoughts themselves. Rather, it's through awareness of the phonological "handles" linked to the thoughts, which other animals lack. 
In short, beings without language can have thoughts, and our consciousness derives its form from the pronunciation of the inner voice, not directly from our thoughts themselves. So thought and consciousness aren't the same thing at all. (p. 109)
Jackendoff separates meaning from a "feeling of meaningfulness" - he writes "Meaning is unconscious." (p. 111).

This all feels pretty jumbled to me!  While it does seem true that a fair amount of what arises in our conscious mind is in the form of thought as language (i.e. internal chatter), I don't think that's all that's there, nor do I think there's such a disconnect from meaning, nor that all meaning is unconscious.  There is certainly the struggle to put a thought into words, but if we can detect that the linguistic expression is not conveying the meaning properly, then it seems to me that the meaning is in some way conscious.

In an earlier chapter, Jackendoff references Wittgenstein, with this quote:  "One is tempted to use the following picture: what he really 'wanted to say', what he 'meant' was already present somewhere in his mind even before we gave it expression." (p. 83, from Wittgenstein's Philosophical Investigations).  Whether this indicates Wittgenstein thought the meaning was truly conscious is unclear.

However, by happenstance I came across an article today on Wittgenstein by Ray Monk, called "Ludwig Wittgenstein’s passion for looking, not thinking", which I liked.  Here's one bit:
Like Freud, Wittgenstein took very seriously indeed the idea that our dreams present us with a series of images, the interpretation of which would reveal the thoughts we have relegated to the unconscious parts of our minds. "If Freud’s theory on the interpretation of dreams has anything in it," Wittgenstein once wrote, "it shows how complicated is the way the human mind represents the facts in pictures. So complicated, so irregular is the way they are represented that we can barely call it representation any longer."
This book surely deserves a more well-thought-out response, but my bottom line is that I don't really buy into the UMH.  It seems to me that we need even more clarity in our language to express meanings - and I am most intrigued by the interactions of the conscious and the unconscious.  In my view the mind includes at least portions of what we might term unconscious - for instance those things that we have learned so well that we don't have to pay conscious attention to them, like riding a bicycle.  Yet if we encounter a dangerous or confusing situation on a bicycle, suddenly we can "shift gears" and become hyper-aware of minute decision-making steps to attempt to avoid catastrophe.

13 August 2012

Subliminal - Leonard Mlodinow (2012)

Subliminal is subtitled 'How Your Subconscious Mind Rules Your Behavior' and the dustjacket features some subtle text (distinguished from the background by being a bit shinier) that says 'Pssst... Hey There Yes: You, Sexy. Buy This Book Now. You Know You Want It.' This marketing ploy is a bit of a giveaway that this book is basically reviewing old tricks, not exploring new ground. The subtitle harkens back to books of the 1970s like 'Subliminal Seduction', a supposed investigation of such advertising techniques.

However - the notion of the subconscious is indeed interesting. The book mostly explores existing research on all sorts of brain/mind processing that goes on outside our conscious awareness, yet seems to greatly influence our decision-making. Such as the finding that IPOs of companies with complicated names seem to perform worse than those of companies with easy-to-say names. Such as the subtle cues of social dominance that pervade human interactions. And so on...

Overall I'd compare this book to 'Brain Wars' - it's again a nice overview for those new to the subject, but probably not worth the time of folks who've read in the field already. The good news is that after a long period where science essentially ignored the idea of the subconscious, it is now a pretty hot area of research.

The area where I'd like to see more research is the process of learning that proceeds from conscious study and concentration to eventual near-unconscious mastery. How does this work? How does one feed more processing into the subconscious? Can subconscious processing be 'overridden' by conscious effort over time?

03 August 2012

Did your brain make you do it?

Recent story from the NYT by John Monterosso and Barry Schwartz, July 27, 2012: "Did your brain make you do it?" about the concept of responsibility. Their basic stance:
it’s worth stressing an important point: as a general matter, it is always true that our brains "made us do it." Each of our behaviors is always associated with a brain state. If we view every new scientific finding about brain involvement in human behavior as a sign that the behavior was not under the individual’s control, the very notion of responsibility will be threatened. So it is imperative that we think clearly about when brain science frees someone from blame — and when it doesn’t.
The authors describe an experiment they carried out to gauge people's sense of causality under different scenarios.
In our experiment, we asked participants to consider various situations involving an individual who behaved in ways that caused harm, including committing acts of violence. We included information about the protagonist that might help make sense of the action in question: in some cases, that information was about a history of psychologically horrific events that the individual had experienced (e.g., suffering abuse as a child), and in some cases it was about biological characteristics or anomalies in the individual's brain (e.g., an imbalance in neurotransmitters). In the different situations, we also varied how strong the connection was between those factors and the behavior (e.g., whether most people who are abused as a child act violently, or only a few).
They found that people seemed to either blame biological causes (i.e. brain injury) or psychological causes (i.e. intentions), a way of thinking they term "naive dualism". Rather than either/or, they point toward a more probabilistic model.
A better question is "how strong was the relation between the cause (whatever it happened to be) and the effect?" If, hypothetically, only 1 percent of people with a brain malfunction (or a history of being abused) commit violence, ordinary considerations about blame would still seem relevant. But if 99 percent of them do, you might start to wonder how responsible they really are.
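
The authors' probabilistic alternative can be made concrete with a toy calculation (my own sketch, with made-up numbers - not from the article): express the strength of a cause-effect link as how much the causal factor raises the rate of the behavior above the general base rate.

```python
# Toy sketch (mine, not the authors'): quantify the strength of the link
# between a causal factor and a behavior as a relative risk.

def relative_risk(rate_with_factor, base_rate):
    """How many times more likely the behavior is, given the factor."""
    return rate_with_factor / base_rate

# Hypothetical numbers: 1% of the general population acts violently.
base = 0.01
weak_link = relative_risk(0.02, base)    # factor barely matters: ratio ~2
strong_link = relative_risk(0.99, base)  # factor nearly determines it: ~99

print(weak_link, strong_link)
```

On this framing, "naive dualism" asks only what kind of cause was present (biological or psychological), while the authors' better question asks how large this ratio is, whatever the cause happened to be.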

20 July 2012

What we 'know' and how it changes us

I noticed a couple of stories recently that indicate some interesting relationships between what we think we know and how that impacts our subsequent reactions to things.  The first involves attitudes about climate change, in the story 'Ideology clouds how we perceive the temperatures' by John Timmer at Ars Technica:
When it comes to things like flood and droughts, most people seem to have accurately registered the recent trends in their area. But when the subject shifts to temperatures, the actual trends become irrelevant, and ideology and political beliefs shape how people perceive things. As the authors put it, "the contentious nature of the climate change debate has influenced the way in which Americans perceive their local weather."
The second story is about so-called 'nocebos' - 'Are Warnings About the Side Effects of Drugs Making Us Sick?' by Steve Silberman. The idea here is that when people are told of the potential harmful side effects of various drugs, some people experience those side effects even when they are given sugar pills!
A placebo, you might say, is an ersatz drug that makes you feel better, while a nocebo is a fake drug that makes you feel worse. Of course, in both cases, it’s not the pill that’s doing the work; it’s your own body, responding to the social context in which you take the pill. If a skilled doctor with kindly bedside manner tells you that drug X will reduce the inflammation of a minor injury, it often will — even if the drug itself is nothing but a capsule full of lactose, milk sugar. One of the astonishing things we’ve discovered about the placebo effect in recent years is how wide a range of ailments can be ameliorated by it, at least temporarily — from chronic pain, to high blood pressure, to inflammation, to depression and anxiety, to sexual dysfunction, to the nausea and vomiting caused by chemotherapy. Perhaps unsurprisingly, it turns out that the nocebo effect is equally capable of making you feel more miserable, in a similarly broad range of ways.
To my thinking at least, these stories both indicate that belief and suggestion can have powerful effects both on thinking and on the physical body.  You may want to keep an eye on Steve's blog NeuroTribes.

28 June 2012

Sterling on Turing & gender

Author Bruce Sterling almost never fails to come up with interesting takes on a variety of subjects.  He recently gave a talk on the occasion of the 100th anniversary of Alan Turing's birth, dealing with the Turing test and more broadly with cognition vs. computation.  Sterling points out that Turing's original description of the imitation game goes like this:  "In the original Turing imitation game, you’ve got three entities: a judge, a woman, and a machine pretending to be a woman."  Sterling spins out some ideas around the role of gender in consciousness, in AI, etc.  Definitely worth reading in full, but here are a couple of passages of interest to this blog.

You could argue that “masculinity” has nothing to do with "intelligence." I might even agree with you, but if my masculinity isn’t an aspect of my so-called intelligence, what is it?

Mathematics may be sexless, but do we really believe that cognition is some quality we have that is strictly divorced from gender? How can you properly claim that you understand how human brains work, if you can’t create a system that expresses a female sexual identity? Because billions of brains do that every day, and it’s not rare, because women are the majority gender. Where is that aspect of human intelligence supposed to be hiding? Is femininity non-algorithmic? Is femininity a Turing non-computable problem?
Sexuality is eons older than intelligence. We’re not abstract mathematical systems somehow burdened by gender. We are living entities produced by sexual means. Those are the facts of life.

We don’t yet know how cognition works. It wouldn’t surprise me to learn that sexual hormones, such as estrogen and testosterone, are fundamental to cognition and even to conscious self-awareness. We should have a spirit of humble inquiry toward cognition. We know far more about it than we did when we invented body-mind duality, but it’s a large, dark area.

22 June 2012

Christian DeQuincey on Consciousness

A few weeks back I was perusing things in a rather New Agey shop in Port Townsend, WA, and came across a book that I decided to pick up.  Christian DeQuincey's Consciousness From Zombies to Angels (2009) is not as frivolous as it may sound - zombies are not only on TV these days, they play an important role in thinking about consciousness (or the lack thereof).  DeQuincey is a philosopher, and he argues that there is just no way that consciousness, or subjectivity, can ever arise from merely physical objects - there's an abiding mystery in how consciousness could simply emerge from some excessively complex organization of material 'stuff' (such as, for example, our physical bodies with the brain).

So what is he proposing instead?  The basic idea is that some sort of consciousness permeates all physical objects, at all levels - a position termed panpsychism.  This is admittedly a hard notion to get one's mind around, and given how hard it is even to get a handle on whether other animals like dolphins have consciousness (or whether some people around us are actually zombies), it's worth pondering.

But let's back up a bit.  What does DeQuincey think consciousness is or is not?  He argues against the idea of consciousness as 'energy' - "let's just realize the simple fact that all forms of energy are spread out in space.  Consciousness, however, doesn't hang out in any kind of space.  You can't see it, touch it, hear it, smell or taste it.  It's just not that kind of thing.  In fact it's not any kind of thing." (p. 18)  This distinction leads to what he identifies as the big blind spot of science: it has made the physical world the focus of all study, and thus has essentially pushed consciousness outside the field of study.

DeQuincey proposes a shift to what he terms 'looking-glass science', which recognizes the scientist's dual role as both observer and observed, a participatory practice - "Consciousness cannot be studied from the outside; it must be viewed from within." (p. 151).  Elsewhere - "Every item of scientific knowledge - the entire edifice of science - exists only because the data was experienced in some scientist's mind." (p. 143).

I liked several things that I found just in flipping through the book in the store.  One factor was that he is much more careful in his use of quantum mechanical ideas than most of the new age writers.  He writes: "It is not the case that the probabilities expressed in the quantum wave function are 'limitless' or represent 'unlimited potential.' The matrix of possibilities expressed in the mathematics of wave mechanics is a limited set of options, and the collapse of the wave function on observation brings one of those options into actuality" (p. 102).

Many may find DeQuincey's work insufferably new age and non-scientific, but I believe he does pinpoint some key problems in the overall (neuro-)scientific approach of studying consciousness and the mind from the outside via observations of the brain (which still must rely on subjective reports of corresponding experience).

11 June 2012

Psychology and Neuroscience - Sarah-Jayne Blakemore

The Edge 370 features Sarah-Jayne Blakemore, who studies adolescent brain development.  The overall news on neuroplasticity is good:
The idea that the brain is somehow fixed in early childhood, which was an idea that was very strongly believed up until fairly recently, is completely wrong. There's no evidence that the brain is somehow set and can't change after early childhood. In fact, it goes through this very large development throughout adolescence and right into the 20s and 30s, and even after that it's plastic forever, the plasticity is a baseline state, no matter how old you are. That has implications for things like intervention programs and educational programs for teenagers.
But I found this passage quite revealing:
One interesting thing to think about, when you're thinking about brain imaging, is why is brain imaging important? What does it teach us that we didn't already know from psychology studies? This is a really important question that a lot of people are asking. Why does it matter that we know that one part of the brain is involved with a process? Why does that matter more than just knowing about this process from a kind of psychological point of view? For example, if you know that one method of teaching works better than another method of teaching, so one method of memory rehearsal worked better than another method, why does knowing that the hippocampus is more involved in one than the other? Why is that useful? Does it tell you any more than you already knew from the psychology results or the education result? I think this is a very open question and often, actually, especially when you're talking about the implications of neuroscience for education, actually, often it's the case that is sort of seduced by these brain images, and we see them and they are very tangible and people suddenly think, "Oh, my God, it has a biological basis," and they somehow seem more convincing and attractive than just pure psychology results. But often they don't really tell us anything more.
This raises the issue that brain imaging itself is not really revealing much without the component of the subjective experience (or at least some sort of behavioral evaluation).  Psychology, and understanding of the subjective experience obviously still matters!

01 June 2012

Brain Wars - by Mario Beauregard (2012)

Brain Wars is a fairly light review of findings battling against the reductionist, materialist view that all the 'mind-stuff' is illusion, nothing more than neurons firing.  Just 214 pages of text, it's an easy read, and I suppose it does a reasonable job of pointing out various scientific findings that indicate that one's thoughts, intentions and beliefs can have an impact on the brain and body.  There are chapters on neurofeedback, neuroplasticity, hypnosis, psi, near-death and mystical experiences among others.  In the conclusion Beauregard suddenly brings up quantum mechanics and nonlocality as a scientific basis for reconsideration of consciousness, which I found to be trivializing and a bit of a tease.

Anyone who has read much in the field will likely find little new information here, but it may be a good introduction for newcomers to this question of the role of mind.

12 May 2012

Eagleman on the downloading question

Found this entry on David Eagleman's blog (associated with his book Incognito): Silicon Immortality: Downloading Consciousness into Computers.  He writes:

We are on a crash-course, however, with technologies that let us store unthinkable amounts of data and run gargantuan simulations. Therefore, well before we understand how brains work, we will find ourselves able to digitally copy the brain's structure and able to download the conscious mind into a computer. 
If the computational hypothesis of brain function is correct, it suggests that an exact replica of your brain will hold your memories, will act and think and feel the way you do, and will experience your consciousness — irrespective of whether it's built out of biological cells, Tinkertoys, or zeros and ones.
I've got a few quibbles with this.

1.  It sounds to me like the project is about recreating brain structures in computers.  Whether this computer, when operating, has what any of us think of as consciousness, is pretty tough to confirm (given that we can't really confirm it with other people today).
2. He claims 'immortality' - but this digital simulation is not the same as the current embodied self.  Even if we assume it is a 'conscious being' with all of our memories (as of the point of download, I guess), it is now on a separate path, and it is a separate being.  Perhaps other people might think of it as being very much like the original person, but its conscious experience now follows its own path.
3.  The usual equivalence of "brain structure" and "conscious mind" erases all the distinctions I'm interested in!

10 May 2012

What does meditation do to the brain?

New York Times article - "In Sitting Still, a Bench Press for the Brain" by John Hanc ran on May 9, 2012.  It's a short report on some basically inconclusive studies looking at the physical impacts of meditation, including one published in February, conducted at UCLA and led by Dr. Eileen Luders.

A striking finding of the study was that the degree of cortical gyrification appeared to increase as the number of years practicing meditation increased. 
“We used to believe that when you were born, your brain would grow and reach a peak in the early 20s and then start shrinking,” Dr. Luders said. “It was thought there was nothing we could do to change that.” Her research suggests that there might be. As a meditator for four years, Dr. Luders understands the degree of mental discipline involved. “People ask, ‘What do you do? Just sit there with your eyes closed?’ It’s actually hard work, because you have to make a constant mental effort.”

The brain... It makes you think. Doesn't it?

The UK Guardian ran a nice little debate between David Eagleman and Raymond Tallis on the role of the brain and the unconscious processes in our behavior (April 28 2012). Eagleman is a neuroscientist interested in the neural correlates of mental activity (and author of Incognito), and Tallis is a professor of medicine who challenges just how truly important the unconscious processing really is. I think it sums up the debate pretty well. While I side closer to Tallis, I think his style is a little obnoxious, and Eagleman keeps it polite. Here's a bit of it:

Eagleman: A person is not a single entity of a single mind: a human is built of several parts, all of which compete to steer the ship of state. As a consequence, people are nuanced, complicated, contradictory. We act in ways that are sometimes difficult to detect by simple introspection. To know ourselves increasingly requires careful studies of the neural substrate of which we are composed.

Tallis: Some of what you have just said sounds like common sense and a retreat from the radical thesis advanced in Incognito. There you put unconscious brain mechanisms in the driving seat – which is why your book has attracted such attention – and argue that important life decisions are strongly influenced by "the covert machinery of the unconscious".

Even when you concede in Incognito that "consciousness is the long-term planner", you still can't let go of the idea of the largely unconscious brain being in charge. This is because you want to privilege brain science. Your case is assisted by personifying the brain, as when you say things like "the brain cares about social interaction".

22 April 2012

Why everyone (else) is a hypocrite - Robert Kurzban (2010)

Robert Kurzban is an associate professor of psychology at the University of Pennsylvania, and his 2010 book entitled Why everyone (else) is a hypocrite is a view of the modular theory of mind.  The basic idea is that the forces of natural selection have resulted in humans having a brain that supports a variety of functions, which are not necessarily united or made consistent.  Kurzban writes: "A module is an information-processing mechanism that is specialized to perform some function."  The key is that the modules are not all sharing information with each other, which throws into question the notion of a singular "you".

As a simple example of this notion, he cites various optical illusions, such as the Müller-Lyer illusion, where lines with different arrow endings appear to be of different lengths but are actually the same - our perceptions 'tell' us one thing about what we're seeing, while we can 'know' via measurement what the situation actually is. There are a variety of modules serving different purposes, and the claim is that these modules act essentially independently, at least in some circumstances.  That in some ways is not so different from the familiar distinction between the conscious mind and the unconscious.

This view of mind leads Kurzban to question the notion of the unitary 'self' - and thus there are many passages along these lines:
It's often appealing to talk about what "I" "believe," or what "you" "believe" - and, in real life, it's often good enough.  But when you're trying to figure out how the mind works, it's important to think about modules, even when making seemingly simple claims that Person X believes p. (p. 72)
He covers a number of different experiments and findings to support this modular view - and one of the key ideas is that context matters in how we process various situations.  For instance, it is very difficult to support the notion that people have fully rational and describable preferences - they seem to vary based on the context of the choice, such as whether it's known to the chooser that other people are involved and that the choice will be known by those others.

Relating things to the title of the book, the concept is that the various modules may have functional reasons for having contradictory 'views' - and if the information is never brought into one resolved view, then we all have various tendencies toward hypocrisy.  Depending on the situation and context, different functional aspects may come to the fore.

Kurzban introduces the notion of the 'press secretary' module of mind - that which communicates to others.  Extending the metaphor, it is often good for the press secretary to be unaware of certain activities, so that no falsehood is given and strategic goals are advanced - and this is tied into some evolutionary arguments of why this may have developed.

Overall I found this to be quite an interesting take on the subject.  But I do have my quibbles.  For one, Kurzban seems to equate 'mind' and 'brain' - and I believe this erases some very important distinctions.  I see mind as an experiential concept - one which may be 'hosted' in the brain, but is not itself aware of all the inner workings of that brain.  So when Kurzban writes things like "I'm not going to talk at any length about where putative modules are, physically, in the mind" (p. 47), I think the reference should be to the brain, a physical object.  And I believe the mind is what provides enough stability and consistency to our experience and our presentation of ourselves to other people for the concept of a 'self' to be important.  Which is not to say our conscious experience is aware of all the modular functions going on in the brain - indeed we are blissfully unaware of much of what goes on there.

I saw that he includes a note on his use of the word 'design' in terms of brain functions:  "Some people don't like the word "design" to be used in the way that I am using it here.  As the material in this chapter should make clear, I intend no consciousness or intention when I use the word." (p. 224).  I can't help feeling that the word 'design' implies a designer and an intention (as well as the potential for designed things to be used in ways they were not designed for), so I am one of those people who don't like its use in this way.  I think it would be sufficient to talk about functional aspects of modules that evolved however they evolved.

20 March 2012

Thinking, Fast and Slow - by Daniel Kahneman (2011)

Daniel Kahneman's life work goes into Thinking, Fast and Slow - the title sums up a good portion of what the book explains, our two main modes of thinking.  He terms the modes System 1 (our essentially instant, intuitive mode of coming up with a quick answer to many questions and decisions) and System 2 (the more deliberative, conscious thinking that we do when we have to - he calls it a lazy system).  System 1 seems to basically be unconscious, and in many situations does a very creditable job of keeping us out of trouble.

But what Kahneman (and his former colleague Amos Tversky) were interested in were the situations in which System 1 goes wrong - systematic biases which steer us away from what economists think of as the rational choice.  And there are many!  Their research looked at anchoring and framing effects, and the way we weigh a small chance of loss far more heavily than an equivalent small chance of gain.
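
The asymmetry between losses and gains has a standard quantitative form: the value function of Kahneman and Tversky's prospect theory. The sketch below uses the median parameter estimates from their 1992 paper (alpha ≈ 0.88, lambda ≈ 2.25), purely for illustration.

```python
# Prospect theory's value function (Kahneman & Tversky), as a quick sketch.
# Parameters are their 1992 median estimates, used here only for illustration.

def subjective_value(x, alpha=0.88, lam=2.25):
    """Perceived value of a gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# A $100 loss 'hurts' far more than a $100 gain 'pleases':
gain = subjective_value(100)   # roughly +57.5
loss = subjective_value(-100)  # roughly -129.5
print(gain, loss)
```

With these parameters a loss looms more than twice as large as an equal gain, which helps explain why people routinely reject even odds on a bet that risks losing $100 to win $110.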

Note that System 2 is not always perfect either. Especially when it comes to statistical and probability-based questions, even professionals often go astray in their thinking unless they are very careful.  And some of our errors may in fact be helpful in certain ways - for example a bias toward optimism very likely helps make people attempt much more than they would otherwise, and sometimes they succeed!

This book is a readable summary of years of interesting work, and sheds much light on how we all tend to think.  The point is not that we can avoid error, but that some awareness of our systematic biases can help to trigger System 2 when it's needed!

Here's a good review from the NYT by Jim Holt.

10 March 2012

'Free Will' by Sam Harris

The question of free will has been bouncing around for a long time, and recent neuroscience is leading some to conclusions that I find needlessly restrictive in their outlook.  I decided to read Sam Harris's latest short piece on "Free Will" to test my thinking.  Here are some thoughts, driven by my reactions to various lines from the eBook.

Harris doesn't do a good job, to my taste, of defining what he means by 'free will' or 'freedom'.  Early on he writes: "Free will is an illusion. Our wills are simply not of our own making." (p 5).  The closest thing I saw to a common definition is this:  "The popular conception of free will seems to rest on two assumptions: (1) that each of us could have behaved differently than we did in the past, and (2) that we are the conscious source of most of our thoughts and actions in the present." (p. 6).  I'm going to ignore part 1, since it's impossible to experimentally test whether one could 'repeat' a choice situation and choose something different.  So I'll focus on #2.

My take on freedom is that there is no such thing as 'complete freedom' - this would seem to me to indicate that there are absolutely no constraints in any way, and I don't see such a situation ever existing.  So freedom is always a relative concept.  One is more free when there are fewer constraints on one's actions, and less free when there are more constraints.

Harris does a great job in pointing out that there are many constraints on our choices.  He writes:  "Unconscious neural events determine our thoughts and actions—and are themselves determined by prior causes of which we are subjectively unaware." p.16.  I think this is largely true - though in a way it's simply a tautology to say we aren't conscious of that which goes on unconsciously.  But I think the focus should be on that of which we are consciously aware: situations where we consider alternatives, think about possibilities, and finally decide upon a course of action.  Even if our preferences aren't consciously created, and we don't understand the basis of our decision-making, we still subjectively face decisions and make choices.  Decisions are sometimes difficult, and over time our decision-making may change, notably because we have learned something from past decisions and behaviors.

Frequently Harris poses the question "Where is the freedom?" if so much is constrained by the past, and we can't account for where our desires and preferences come from.  I would say that if freedom is about movement within constraints, then it doesn't necessarily matter that we don't know why we want what we want.  We also don't know why there is gravity, but there is, and we are constrained by it.  So too we are constrained by certain preferences, some of which we could probably alter if we chose to, and some of which seem to alter over time without any conscious effort.

Harris dwells on the fact that we did not choose much of our past experience.  "Take a moment to think about the context in which your next decision will occur: You did not pick your parents or the time and place of your birth. You didn’t choose your gender or most of your life experiences. You had no control whatsoever over your genome or the development of your brain." (p. 40).  As you would expect from this blog, I dispute the last point, on the development of your brain.  I believe that choices and behaviors we choose today will impact our brain and our unconscious processing in the future.  When we attempt to learn something, we have to concentrate consciously, and think about each new choice.  As we master a subject or learn how to do a physical task, we don't have to try so hard consciously - we've absorbed it in our brains, and we can take action unconsciously.  This adds some weight to the compatibilist position that Harris rejects, in which the individual must be considered as more than just the conscious awareness.

So my sense is that Harris does describe many of the true constraints on our will, but I don't agree with his take that you must understand and control every underlying process to achieve 'freedom'.

03 March 2012

On Free Will

The piece 'Is Neuroscience the Death of Free Will' by Eddy Nahmias (NYT Opinionator, Nov 13, 2011) sums up my take on this matter pretty well.

Many philosophers, including me, understand free will as a set of capacities for imagining future courses of action, deliberating about one’s reasons for choosing them, planning one’s actions in light of this deliberation and controlling actions in the face of competing desires. We act of our own free will to the extent that we have the opportunity to exercise these capacities, without unreasonable external or internal pressure. We are responsible for our actions roughly to the extent that we possess these capacities and we have opportunities to exercise them.

These capacities for conscious deliberation, rational thinking and self-control are not magical abilities. They need not belong to immaterial souls outside the realm of scientific understanding (indeed, since we don’t know how souls are supposed to work, souls would not help to explain these capacities). Rather, these are the sorts of cognitive capacities that psychologists and neuroscientists are well positioned to study.
The range of comments is also interesting.  Many folks seem very opposed to the idea of free will - but then again, have they any choice?

Also liked this aside from Robert Anton Wilson (found here):
"Incidentally, you can get a quick estimate of a person's intelligence by asking them how much of themselves is robotic. Those who say "not at all" or "less than 50%" are hopeless imbeciles, always. The few who say "about 99%" are worth talking to; they are quite intelligent."