22 December 2014

Modeling the worm!

A recent physical simulation of the C. elegans 302-neuron worm with Lego, as reported here at the I Programmer site on Nov. 16, 2014: "A Worm's Mind In A Lego Body" by Lucy Black.  This is a nice follow-up to my May 2013 post Modeling Simple Worms.  The claim is that the computer model of these neurons is able to produce simple physical behavior that is like the worm's behavior (note that the worm is very small and only capable of simple behavior).  There's a video that shows the Lego model in action.
It is claimed that the robot behaved in ways that are similar to observed C. elegans. Stimulation of the nose stopped forward motion. Touching the anterior and posterior touch sensors made the robot move forward and back accordingly. Stimulating the food sensor made the robot move forward.

The key point is that there was no programming or learning involved to create the behaviors. The connectome of the worm was mapped and implemented as a software system and the behaviors emerge.

The connectome may only consist of 302 neurons, but it is self-stimulating, and it is difficult to understand how it works - yet it does.
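
For anyone curious what "implemented as a software system" might look like, here is a minimal sketch of the idea: a fixed table of weighted connections, a simple threshold rule, and sensor stimulation propagating through to motor neurons.  The neuron names, weights, and update rule below are purely illustrative - this is not the actual Connectome Engine or OpenWorm code.

    # Toy connectome-driven behavior: no task-specific programming, just
    # activation flowing through a fixed (made-up) wiring table.
    CONNECTOME = {
        # pre-synaptic neuron -> list of (post-synaptic neuron, weight)
        "NOSE_TOUCH": [("INTER_A", 2.0)],
        "FOOD_SENSE": [("INTER_B", 2.0)],
        "INTER_A":    [("MOTOR_BACK", 1.5), ("MOTOR_FWD", -1.0)],
        "INTER_B":    [("MOTOR_FWD", 1.5)],
    }
    THRESHOLD = 1.0

    def step(active, acc):
        """One tick: active neurons add their weights to their targets;
        any target that crosses THRESHOLD fires on the next tick."""
        for pre in active:
            for post, w in CONNECTOME.get(pre, []):
                acc[post] = acc.get(post, 0.0) + w
        fired = {n for n, v in acc.items() if v >= THRESHOLD}
        for n in fired:
            acc[n] = 0.0  # reset accumulated charge after firing
        return fired

    def run(stimulus, ticks=3):
        """Stimulate sensor neurons and report which motor neurons fired."""
        active, acc, motors = set(stimulus), {}, set()
        for _ in range(ticks):
            active = step(active, acc)
            motors |= {n for n in active if n.startswith("MOTOR")}
        return motors

    print(run({"NOSE_TOUCH"}))  # {'MOTOR_BACK'} - nose touch halts forward motion
    print(run({"FOOD_SENSE"}))  # {'MOTOR_FWD'}  - food sensor drives forward

The point the article makes is that in the real project the wiring table comes from the mapped connectome itself, not from anyone designing the behaviors.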
As for the claim about the worm's mind...  well, we don't know much about a worm's mind, so how could we know if this model captures it?

The simulation project is run by Tim Busbice at The Connectome Engine.  There's another story on the simulation at the New Scientist site: "First digital animal will be perfect copy of real worm."

20 December 2014

Some AI items - what are the limits?

Rodney Brooks pushes back: "artificial intelligence is a tool, not a threat" (Nov 10 at the Rethink Robotics blog) - here's an excerpt:
Recent advances in deep machine learning let us teach our machines things like how to distinguish classes of inputs and to fit curves to time data.  This lets our machines “know” whether an image is that of a cat or not, or to “know” what is about to fail as the temperature increases in a particular sensor inside a jet engine.  But this is only part of being intelligent, and Moore’s Law applied to this very real technical advance will not by itself bring about human level or super human level intelligence.
Gary Marcus has also been writing on the topic, such as this piece from October 24 at the New Yorker: "Why We Should Think About the Threat of Artificial Intelligence":
Barrat's core argument, which he borrows from the A.I. researcher Steve Omohundro, is that the drive for self-preservation and resource acquisition may be inherent in all goal-driven systems of a certain degree of intelligence. In Omohundro's words, "if it is smart enough, a robot that is designed to play chess might also want to build a spaceship," in order to obtain more resources for whatever goals it might have.
Marcus also chats with Russ Roberts on his EconTalk podcast, posted Dec. 15.

25 October 2014

What can the mind-set do to the body?

That's the question at issue in the recent NYT article by Bruce Grierson, "What if Age Is Nothing but a Mind-Set?", covering the work of Harvard psychologist Ellen Langer.  Langer has done various studies in which people are "primed" with positive information about their situation - that they have control, are responsible, etc. - and has frequently seen improvements.

The title study, done in 1981, involved bringing a group of men in their 70s into a controlled environment simulating 1959 for five days and then evaluating them on various measures.  In various ways they appeared to be "younger" afterwards - in manual dexterity, in sitting taller.  Earlier studies had led her to this way of thinking about priming:
To Langer, this was evidence that the biomedical model of the day — that the mind and the body are on separate tracks — was wrongheaded. The belief was that “the only way to get sick is through the introduction of a pathogen, and the only way to get well is to get rid of it,” she said, when we met at her office in Cambridge in December. She came to think that what people needed to heal themselves was a psychological “prime” — something that triggered the body to take curative measures all by itself.
If we believe the mind to be the result of a physical process, then I don't see it as too far-fetched to believe that different mind-sets can manifest in different physical outcomes.  This is of course related to placebos generally:
Langer came to believe that one way to enhance well-being was to use all sorts of placebos. Placebos aren’t just sugar pills disguised as medicine, though that’s the literal definition; they are any intervention, benign but believed by the recipient to be potent, that produces measurable physiological changes. Placebo effects are a striking phenomenon and still not all that well understood. Entire fields like psychoneuroimmunology and psychoendocrinology have emerged to investigate the relationship between psychological and physiological processes. Neuroscientists are charting what’s going on in the brain when expectations alone reduce pain or relieve Parkinson’s symptoms. More traditionally minded health researchers acknowledge the role of placebo effects and account for them in their experiments. But Langer goes well beyond that. She thinks they’re huge — so huge that in many cases they may actually be the main factor producing the results.
Now Langer is taking the research to an extreme - setting up a positive situation for women with stage 4 breast cancer, which the medical establishment essentially has no answers for. While it's hard to believe that this will work, it still seems to me to be an avenue worth pursuing.

12 October 2014

Poor title edition: "Are We Really Conscious?" in NYT

The article is by Michael Graziano of Princeton, "Are We Really Conscious?", posted Oct. 10, 2014.  I think a better headline would be "What Are We Really Conscious Of?" (I recognize that authors usually don't pen the headlines).  In any case, Graziano has a theory that our awareness is a distortion of reality, and that basic point is not a new one.  But there is one line I object to, and it seems like Graziano himself contradicts it in the article.

Midway through, he writes: "But the argument here is that there is no subjective impression; there is only information in a data-processing device." ('Device' here referring to a brain.)  This is the classic reductionist move - it's only data processing.  It fails to recognize the potential for levels of complexity and organization that give rise to interesting phenomena in their own right, and it reduces the brain - a really interesting thing, not yet well understood - to a "device".

Further in the paragraph:  "The brain’s cognitive machinery accesses that interlinked information and derives several conclusions: There is a self, a me; there is a red thing nearby; there is such a thing as subjective experience; and I have an experience of that red thing."  Now this is at least a little more open to interesting investigation - what might we mean by self? what is a subjective experience?

My conclusion these days is that a coherent view of 'self' is at the very least the entire organism (i.e. my whole body), and probably it needs to go beyond that, to extend some way into the environment.  So does my self have experiences?  Yes, I think it certainly does.  My body (which obviously includes my brain) reacts to experiences.  Experiences appear to have an information-processing aspect or basis, and that's very interesting, but to end there is to miss at least half the story (IMHO).

By the end, Graziano writes: "In this theory, awareness is not an illusion. It’s a caricature. Something — attention — really does exist, and awareness is a distorted accounting of it."

Ok, so which is it - do we have subjective experience or not?

27 July 2014

Vaughan Bell on Neuro Metaphors - The Observer.

Bell sums up how these metaphors help and probably limit us: 'From photography to supercomputers: how we see ourselves in our inventions' by Vaughan Bell in The Observer, July 26, 2014.
When computers arrived, we inevitably saw ourselves in our machines and the idea of the mind as an information processor became popular. Here, the mind is thought to consist of information processing networks where data is computed and transformed. One of the newest and most fashionable theories argues that the central function of the brain is to statistically predict new information. The idea is that the brain tries to minimise the errors it makes in its predictions by adjusting its expectations as it gets new information.
I've long thought about this same issue - that we overuse the metaphor when thinking about the brain.  Rather than simply working with the idea that computers replicate certain functions of a brain, we start to believe that the brain must work like (or perhaps in a sense be) a computer.  But the metaphor does give us an approach that can bear fruit, sometimes for a long time.
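
The prediction-error idea Bell summarizes can be made concrete with a toy update rule: the expectation gets nudged toward each new observation by a fraction of the error.  This is just an illustration of the general scheme - the learning rate and data below are made up, and real predictive-processing models are far more elaborate.

    # Toy "minimise prediction error by adjusting expectations":
    # each observation pulls the expectation toward it by a fraction
    # of the prediction error (a simple delta rule; numbers are made up).
    def update(expectation, observation, learning_rate=0.3):
        error = observation - expectation   # prediction error
        return expectation + learning_rate * error

    expectation = 0.0
    for observation in [1.0, 1.0, 1.0, 0.0, 1.0]:
        expectation = update(expectation, observation)
        print(f"obs={observation:.1f} -> expectation={expectation:.2f}")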

01 March 2014

'Touching a Nerve' by Patricia Churchland (2013):

 A personal, non-academic review of the life physical.  In general I found 'Touching a Nerve' a good read - the subtitle 'The Self as Brain' is a kind of undertone to her (largely common-sensical) look at a variety of subjects, all based on a neuroscientific outlook.  Here are a few points of special interest to me.

On free will - she comes out pretty strongly against the Harris line that "free will is an illusion" - she says if it means there's no "contra-causal free will" then the observation is "only marginally interesting."
But what if free will is illusory means something else? What if it means, for example, that because there is a neural substrate for our deliberations and choices, we cannot have free will? Now I am totally at a loss. Why would anyone say such a thing? So what do they think is required to make genuine choices? A non-physical soul? Says who? (184)
What is not illusory is self-control, even though it can vary as a function of age, temperament, habits, sleep, disease, food and many other factors that affect how nervous systems function. (185) 
Churchland gives credit to Freud for being an early adopter (circa 1895) of the view that unconscious processes are both mental and physical.
He understood that unconscious reasoning and intentions and thoughts need to be invoked to explain such things as complex perception (for example, heard speech as having a specific meaning) and complex motor acts (for example, speaking intelligibly and purposefully).
[...] 
He realized that he had essentially no idea what a vocabulary spanning the brain and behavioral science would look like. His conclusion was that we have no choice but to make do with what we know is a flawed and misleading vocabulary, namely, that of intentions, reasons, beliefs, and so on, to describe unconscious states. (201)
I liked this bit, on the interplay of conscious and unconscious:
Your conscious brain needs your unconscious brain, and vice versa. The character and features of your conscious life depend on your unconscious activities. And of course, conscious events can in turn have an effect on unconscious activities. (207) 
And this bit on conscious decision-making as a constraint satisfaction process:
Precisely what my dear old brain is doing as I go through these exercises is not entirely known. That is, we can think of it in terms of constraint satisfaction, but we are still a bit vague about what constraint satisfaction is in neural terms. Roughly speaking, we do know that in constraint satisfaction operations, the brain integrates skills, knowledge, memories, perceptions, and emotions and somehow, in a manner we do not precisely understand, comes to a single result. (219)
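
To make "constraint satisfaction" a bit more concrete, here is a toy version: competing considerations, each with a weight, and a settling step that returns the single best-supported option.  The options, constraints, and weights are invented for illustration, and this simple weighted-scoring toy is of course nothing like the neural-level process Churchland is pointing at.

    # Toy constraint satisfaction: pick the option that best satisfies a
    # set of weighted, conflicting considerations (all values made up).
    options = ["take the job", "decline the job"]
    constraints = [
        # (consideration, weight, option it favors)
        ("higher salary",         2.0, "take the job"),
        ("more time with family", 3.0, "decline the job"),
        ("career growth",         1.5, "take the job"),
    ]

    def settle(options, constraints):
        """Score each option by the total weight of the constraints it
        satisfies and return the single best-supported option."""
        scores = {opt: 0.0 for opt in options}
        for _name, weight, favored in constraints:
            scores[favored] += weight
        return max(scores, key=scores.get), scores

    best, scores = settle(options, constraints)
    print(scores)            # {'take the job': 3.5, 'decline the job': 3.0}
    print("decision:", best)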
She disputes Dennett's position that language has to be part of consciousness, partly on personal grounds:
A further problem is that consciousness - mine, anyhow - involves so much more than speech. Indeed, we may experience much for which we have no precise linguistic characterization at all, such as the difference between the smell of cinnamon and the smell of cloves or the difference between feeling energetic and feeling excited, or what an orgasm is like. (250)
While other mammals do not have our kind of language, they do seem to communicate, and in terms of brain structure, they have very similar organs and patterns of activity.  She feels this indicates that many animals have some level of conscious awareness.

The overall picture she draws is of the brain as a looping structure, with some highly networked neurons able to convey signals to many other areas, to support the type of integration that we see.

I noticed that the book initially got a number of one-star reviews at Amazon, mostly short critiques of what they saw as her overly reductionist viewpoint (an organized effort, I presume!).  I did not find her to be overly reductionist in this book.  While she doesn't explicitly take on emergence as a topic, in the epilogue she does make this argument:
If, as seems increasingly likely, dreaming, learning, remembering, and being consciously aware are activities of the physical brain, it does not follow that they are not real. Rather, the point is that their reality depends on a neural reality. If reductionism is essentially about explanation, the lament and the lashing out are missing the point. Nervous systems have many levels of organization, from molecules to the whole brain, and research on all levels contributes to our wider and deeper understanding. (262)
Accessible, personal, and a good overview - I recommend it.

19 February 2014

Dennett and Harris wrestle on 'Free Will'

My take on the recent essays by Daniel Dennett and Sam Harris on the short Harris book 'Free Will' (2012; my original thoughts here).  Dennett wrote a 22-page review of the book in late January, and Harris put out his reply about a week later.  These are my brief notes on the exchange.

Dennett's main claims:

1. Harris is fighting a strawman - everyone basically agrees that folk free will (i.e. libertarian free will) is wrong.  Dennett makes a case for compatibilist notions (without claiming determinism is a settled matter - he says it is up to science to decide).

2. Harris is not taking the compatibilist position seriously and/or not engaging it in an informed way.  Dennett does certainly appeal to the 'vast library' in a condescending way.

3. Harris seems fixed on one point in time, not dealing with dynamics over time.  Dennett rejects the Exact Replay scenario (rewind the clock, put literally everything back in place - see also #4 below in the Harris section).  I liked this passage:
Harris ignores the reflexive, repetitive nature of thinking. My choice at time t can influence my choice at time t’ which can influence my choice at time t”.  How?  My choice at t can have among its effects the biasing of settings in my brain (which I cannot directly inspect) that determine (I use the term deliberately) my choice at t’. I can influence my choice at t’. I influenced it at time t (without “inspecting” it).  Like many before him, Harris shrinks the me to a dimensionless point, “the witness” who is stuck in the Cartesian Theater awaiting the decisions made elsewhere. That is simply a bad theory of consciousness.
4. Claims Harris is inconsistent about whether we can 'grab hold of our puppet strings' - perhaps also about how influence can work.

5. Dennett acknowledges we can't be the 'ultimate cause' - an infinite regress issue.  But over time we can "influence ourselves" (and others) in meaningful ways.

6. Dennett takes issue with what he sees as Harris's evasion of responsibility.  I think this is the main area where the two men are actually probably not far apart in practice, but are accusing each other of allowing for bad results.  Key lines:
Harris should take more seriously the various tensions he sets up in this passage.  It is wise to hold people responsible, he says, even though they are not responsible, not really. But we don’t hold everybody responsible; as he notes, we excuse those who are unresponsive to demands, or in whom change is impossible. That’s an important difference, and it is based on the different abilities or competences that people have.  Some people (are determined to) have the abilities that justify our holding them responsible, and some people (are determined to) lack those abilities. But determinism doesn’t do any work here; in particular it doesn’t disqualify those we hold responsible from occupying that role.  In other words, real responsibility, the kind the everyday folk think they have (if Harris is right), is strictly impossible; but when those same folk wisely and justifiably hold somebody responsible, that isn’t real responsibility!
Overall:  I actually found Dennett fairly straightforward, somewhat condescending, and probably drawing some unwarranted conclusions about the Harris position.  I found his points to be pretty interesting and worthy of consideration.

Harris's main claims:

1. Says Dennett misunderstands his arguments.  Claims that libertarian free will is quite widely held still.

2. Harris is fully focused on taking down libertarian free will. He feels that removing that illusion will remove any rational reason for hatred - but leaves in place reasons for removing dangerous folks from society. Key lines:
And accepting incompatibilism has important intellectual and moral consequences that you ignore—the most important being, in my view, that it renders hatred patently irrational (while leaving love unscathed). If one is concerned about the consequences of maintaining a philosophical position, as I know you are, helping to close the door on human hatred seems far more beneficial than merely tinkering with a popular illusion.
3. Harris says he's not fully convinced of determinism, but thinks it must be nearly true.

4. There's a weird line about indeterminism re: the putt replay.  Harris writes:
That is, whatever his ability as a golfer, Austin would miss that same putt a trillion times in a row—provided that every atom and charge in the universe was exactly as it had been the first time he missed it. You think this fact (we can call it determinism, as you do, but it includes the contributions of indeterminism as well, provided they remain the same[3]) says nothing about free will.  
This seems to me to describe the 'pseudo-random' case, not truly random indeterminism.  Not a big deal, but I found it odd to have the "provided they remain the same" qualifier.

5. As Dennett argues, I think Harris does not really grapple with the compatibilist argument.  He does not engage with the idea of the dynamic system changing (influencing its future direction) over time.
Key lines: "In other words, your compatibilism seems an attempt to justify the conventional notion of blame, which my view denies. This is a difference worth focusing on."

Overall: Harris actually seems a bit more whiny than Dennett.  He says he wanted a debate or conversation, not to trade essays.  I think that Dennett mostly understands exactly where Harris stands.  Harris has his reasons for not wanting to grapple with compatibilism, but personally I think he hasn't shown that he really has a grip on Dennett's points.