02 May 2023

AI reads the brain?

Well, long time no posting!

The post "A.I. trained to read minds and translate private thought to text via brain scans" from BoingBoing caught my eye. Here, a model given extensive training on a specific person's brain activity while that person listened to spoken text is able to correlate later brain activity (recorded while the person watches silent films or imagines speaking) and does pretty well at reconstructing at least some of what the person "had in mind". Note though that patterns for one person do not carry over to other people.

This language-decoding method had limitations, Dr. Huth and his colleagues noted. For one, fMRI scanners are bulky and expensive. Moreover, training the model is a long, tedious process, and to be effective it must be done on individuals. When the researchers tried to use a decoder trained on one person to read the brain activity of another, it failed, suggesting that every brain has unique ways of representing meaning.
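That per-person limitation is easy to illustrate with a toy linear model. This is a hypothetical sketch with synthetic data, not the actual study's method, and all the sizes and variable names are made up: the idea is just that each brain maps the same stimulus features to voxel responses through its own idiosyncratic transform, so a decoder fit to one brain is useless on another.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_features, n_voxels = 200, 10, 50

# The same stimuli (e.g. word-embedding features) presented to two people.
stimuli = rng.normal(size=(n_samples, n_features))

# Each "brain" maps stimulus features to voxel responses differently.
map_A = rng.normal(size=(n_features, n_voxels))
map_B = rng.normal(size=(n_features, n_voxels))
scans_A = stimuli @ map_A + 0.1 * rng.normal(size=(n_samples, n_voxels))
scans_B = stimuli @ map_B + 0.1 * rng.normal(size=(n_samples, n_voxels))

# Fit a ridge-regression decoder on person A: voxel responses -> features.
lam = 1.0
decoder = np.linalg.solve(scans_A.T @ scans_A + lam * np.eye(n_voxels),
                          scans_A.T @ stimuli)

def decode_error(scans):
    """Mean squared error of decoded features vs. the true stimuli."""
    return np.mean((scans @ decoder - stimuli) ** 2)

err_same = decode_error(scans_A)    # decoder matches this brain's mapping
err_other = decode_error(scans_B)   # different brain, different mapping
```

Running this, `err_same` comes out far smaller than `err_other`, mirroring the quoted finding that a decoder trained on one person fails on another's brain activity.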

14 January 2018

Worms re-grow brains with old memories?

How much do we really know about memory storage?  This story from National Geographic may make you think again: "Decapitated Worms Re-Grow Heads, Keep Old Memories" by Carrie Arnold (dated July 16, 2013).
After the team verified that the worms had memorized where to find food, they chopped off the worms’ heads and let them regrow, which took two weeks. 
Then the team showed the worms with the regrown heads where to find food, essentially a refresher course of their light training before decapitation. 
Subsequent experiments showed that the worms remembered where the light spot was, that it was safe, and that food could be found there. The worms’ memories were just as accurate as those worms who had never lost their heads.

16 May 2016

What do we really know about Matter?

Two recent articles hit a similar theme, pushing the notion that our experience (consciousness) is in some sense on firmer ground than our understanding of physical matter.

The first I came across today via Twitter: "Consciousness isn't a Mystery: It's Matter" by philosopher Galen Strawson in the New York Times (May 16, 2016).  Here's the gist:
... we know exactly what consciousness is — where by “consciousness” I mean what most people mean in this debate: experience of any kind whatever. It’s the most familiar thing there is, whether it’s experience of emotion, pain, understanding what someone is saying, seeing, hearing, touching, tasting or feeling. It is in fact the only thing in the universe whose ultimate intrinsic nature we can claim to know. It is utterly unmysterious.

The nature of physical stuff, by contrast, is deeply mysterious, and physics grows stranger by the hour. (Richard Feynman’s remark about quantum theory — “I think I can safely say that nobody understands quantum mechanics” — seems as true as ever.) Or rather, more carefully: The nature of physical stuff is mysterious except insofar as consciousness is itself a form of physical stuff
I think this is on the right track...  emphasizing the primacy of experience, but not claiming that experience is necessarily exposing the actual nature of 'physical stuff'.  It's easy to assume we have a good handle on Matter, when in fact we've only discovered some rules about it, along with the working assumption that whatever it is, if you get a complex enough organization you get what we think of as conscious experience.

Back in April, Amanda Gefter wrote on and interviewed cognitive scientist (and author of Visual Intelligence) Donald Hoffman in an article entitled "The Case Against Reality" in The Atlantic.  Hoffman argues that our evolutionary path driven by fitness means that we have no reliable means of accessing what's really out there.
The idea that what we’re doing is measuring publicly accessible objects, the idea that objectivity results from the fact that you and I can measure the same object in the exact same situation and get the same results — it’s very clear from quantum mechanics that that idea has to go. Physics tells us that there are no public physical objects. So what’s going on? Here’s how I think about it. I can talk to you about my headache and believe that I am communicating effectively with you, because you’ve had your own headaches. The same thing is true of apples and the moon and the sun and the universe. Just like you have your own headache, you have your own moon. But I assume it’s relevantly similar to mine. That’s an assumption that could be false, but that’s the source of my communication, and that’s the best we can do in terms of public physical objects and objective science.
Presumably since all humans are on the same evolutionary path, we do indeed have similar assumptions and experiences. But it may be much harder to communicate with beings coming from different evolutionary pressures.

21 October 2015

More news on the worm C. elegans - a few more neurons?

Been a while since I've been active here, but this exciting worm news certainly rates a post.  To get caught up, check this previous post: Modeling the Worm!

In this story from Nature, "Surprise 'mystery' neurons found in male worms", the title gives it away.
The neurons help the worms learn when to prioritize mating over eating, revealing how a seemingly simple brain can be capable of a complex learned behaviour — and one that differs between the sexes.

Caenorhabditis elegans worms are the model animal of choice for many neuroscientists, because their neural circuits are so simple that they can be mapped in full. They have two sexes: hermaphrodite and male. Hermaphrodites, the best studied, have just 302 neurons, but males have more — the MCMs raise their total to 385 neurons.
So it looks like there's more work to be done to get a good handle on the worm brain.

22 December 2014

Modeling the worm!

A recent physical simulation of the C. elegans 302-neuron worm was built with Lego, as reported at the I Programmer site on Nov. 16, 2014: "A Worm's Mind In A Lego Body" by Lucy Black.  This is a nice follow-up to my May 2013 post Modeling Simple Worms.  The claim here is that the computer model of these neurons is able to produce simple physical behavior that is like the worm's behavior (note that the worm is very small and only capable of simple behavior).  There's a video that shows the Lego model in action.
It is claimed that the robot behaved in ways that are similar to observed C. elegans. Stimulation of the nose stopped forward motion. Touching the anterior and posterior touch sensors made the robot move forward and back accordingly. Stimulating the food sensor made the robot move forward.

The key point is that there was no programming or learning involved to create the behaviors. The connectome of the worm was mapped and implemented as a software system and the behaviors emerge.

The connectome may only consist of 302 neurons but it is self-stimulating and it is difficult to understand how it works - but it does.
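The "connectome implemented as a software system" idea can be sketched very roughly: treat the connectome as a weighted adjacency matrix and run simple threshold neurons over it, with no behavior programmed in. This is a hypothetical toy, not the actual project's code; the weights here are random stand-ins, and the neuron indices and threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 302                          # C. elegans hermaphrodite neuron count
W = rng.normal(0, 0.1, (N, N))   # stand-in for the mapped synaptic weights

def step(activity, threshold=0.1):
    """One update: each neuron sums its weighted input and fires if over threshold."""
    drive = W @ activity
    return (drive > threshold).astype(float)

# "Stimulate the nose": inject activity into a few (pretend) sensory neurons
# and let the network propagate it for a few ticks.
state = np.zeros(N)
state[:4] = 1.0                  # pretend indices 0-3 are nose touch sensors
for _ in range(10):
    state = step(state)

# Motor output is read off another (pretend) subset of neurons.
motor_drive = state[-8:].sum()
```

The point the article makes survives even in this caricature: nothing in the loop encodes "stop", "go forward", or "back up" — any such behavior has to emerge from the pattern of connection weights alone.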
As for the claim about the worm's mind...  well we don't know much about a worm's mind, so how could we know if this model captures it?

The simulation project is run by Tim Busbice at The Connectome Engine.  There's another story on the simulation at the New Scientist site: "First digital animal will be perfect copy of real worm."

20 December 2014

Some AI items - what are the limits?

Rodney Brooks pushes back: "artificial intelligence is a tool, not a threat" (Nov 10 at the Rethink Robotics blog) - here's a piece:
Recent advances in deep machine learning let us teach our machines things like how to distinguish classes of inputs and to fit curves to time data.  This lets our machines “know” whether an image is that of a cat or not, or to “know” what is about to fail as the temperature increases in a particular sensor inside a jet engine.  But this is only part of being intelligent, and Moore’s Law applied to this very real technical advance will not by itself bring about human level or super human level intelligence.
Gary Marcus has also been writing on the topic, such as this piece from October 24 at the New Yorker: "Why We Should Think About the Threat of Artificial Intelligence":
Barrat's core argument, which he borrows from the A.I. researcher Steve Omohundro, is that the drive for self-preservation and resource acquisition may be inherent in all goal-driven systems of a certain degree of intelligence. In Omohundro's words, "if it is smart enough, a robot that is designed to play chess might also want to build a spaceship," in order to obtain more resources for whatever goals it might have.
Marcus chats with Russ Roberts on his EconTalk podcast, posted Dec. 15.

25 October 2014

What can the mind-set do to the body?

That's the question at issue in the recent NYT article by Bruce Grierson, "What if Age Is Nothing but a Mind-Set?", covering the work of Harvard psychologist Ellen Langer.  Langer has conducted various studies in which people are "primed" with positive information about their situation (that they have control, are responsible, etc.) and has frequently seen improvements.

The title study, done in 1981, involved bringing a group of men in their 70s into a controlled environment simulating 1959 for five days and then evaluating them on various measures.  In various ways they appeared to be "younger" afterwards: in manual dexterity, in sitting taller.  Previous studies had led her to this way of thinking about priming:
To Langer, this was evidence that the biomedical model of the day — that the mind and the body are on separate tracks — was wrongheaded. The belief was that “the only way to get sick is through the introduction of a pathogen, and the only way to get well is to get rid of it,” she said, when we met at her office in Cambridge in December. She came to think that what people needed to heal themselves was a psychological “prime” — something that triggered the body to take curative measures all by itself.
If we believe the mind to be the result of a physical process, then I don't see it as too far-fetched to believe that different mind-sets can manifest in different physical outcomes.  This is of course related to placebos generally:
Langer came to believe that one way to enhance well-being was to use all sorts of placebos. Placebos aren’t just sugar pills disguised as medicine, though that’s the literal definition; they are any intervention, benign but believed by the recipient to be potent, that produces measurable physiological changes. Placebo effects are a striking phenomenon and still not all that well understood. Entire fields like psychoneuroimmunology and psychoendocrinology have emerged to investigate the relationship between psychological and physiological processes. Neuroscientists are charting what’s going on in the brain when expectations alone reduce pain or relieve Parkinson’s symptoms. More traditionally minded health researchers acknowledge the role of placebo effects and account for them in their experiments. But Langer goes well beyond that. She thinks they’re huge — so huge that in many cases they may actually be the main factor producing the results.
Now Langer is taking the research to an extreme - setting up a positive situation for women with stage 4 breast cancer, which the medical establishment essentially has no answers for. While it's hard to believe that this will work, it still seems to me to be an avenue worth pursuing.

12 October 2014

Poor title edition: "Are We Really Conscious?" in NYT

The article is by Michael Graziano of Princeton, "Are We Really Conscious?", posted Oct. 10, 2014.  I think the better headline would be "What Are We Really Conscious Of?" (I recognize that authors usually don't pen the headlines).  In any case, Graziano has a theory about our awareness being a distortion of reality, and that basic point is not a new one.  But there is one line which I object to, and it seems that Graziano himself contradicts it in the article.

Midway through, he writes: "But the argument here is that there is no subjective impression; there is only information in a data-processing device." ('Device' here referring to a brain.) This is the classic reductionist move: it's only data processing. It fails to recognize the potential for levels of complexity and organization that give rise to interesting phenomena in their own right, and it reduces the brain, a really interesting thing that is not yet well understood, to a mere "device".

Further in the paragraph:  "The brain’s cognitive machinery accesses that interlinked information and derives several conclusions: There is a self, a me; there is a red thing nearby; there is such a thing as subjective experience; and I have an experience of that red thing."  Now this is at least a little more open to interesting investigation - what might we mean by self? what is a subjective experience?

My conclusion these days is that a coherent view of 'self' is at the very least the entire organism (i.e. my whole body), and it probably needs to go beyond that, extending some way into the environment.  So does my self have experiences? Yes, I think it certainly does.  My body (which obviously includes my brain) reacts to experiences.  Experiences appear to have an information-processing aspect or basis, and that's very interesting, but to end there is to miss at least half the story (IMHO).

By the end, Graziano writes: "In this theory, awareness is not an illusion. It’s a caricature. Something — attention — really does exist, and awareness is a distorted accounting of it."

Ok, so which is it - do we have subjective experience or not?