25 November 2013

How much neuroscience in 'Social'?

Psychologist Matthew Lieberman does like the fMRI!  In his new book 'Social' (2013) the UCLA professor and Director of the Social Cognitive Neuroscience Lab makes the case for the neural underpinnings of our social learning and behavior.  The question that came to my mind though was how much of the message was basically social psychology (which is valuable, don't get me wrong, but not dependent on fMRI findings).

The book features many diagrams of brains, pointing out various regions that are active during different cognitive tasks.  In general, correlating active areas with cognitive tasks can be very useful for understanding brain structures, even if it doesn't tell us how the cognitive tasks are actually achieved. Most illuminating are the findings where either the same area is used during different types of tasks, or where different areas are used for what seem to be very similar tasks.  I think it's probably valuable to combine these types of findings with traditional psychology to see what may be illuminated.

Lieberman's key claim is that our 'default' brain mode is used for so-called 'mentalizing' - sorting through the social world, trying to understand other people's motives and intentions. This is shown by the activation of certain brain areas both while explicitly thinking about social problems and when not attempting to do other cognitive tasks.

We typically use particular prefrontal brain regions for general cognition (reading, memorizing, computing, etc.), and it was thought that these areas were critical to all learning.  But various studies have found a 'social encoding advantage' in learning: using the mentalizing system to form overall impressions of people and their intentions works better than simple memorization of people's behavior.  The finding was that 'the folks making sense of the information socially have done better on memory tests than the folks intentionally memorizing the material.' (284)  From the neuroscience angle:
Jason Mitchell, a social neuroscientist at Harvard University, ran an fMRI version of the social encoding advantage study. As in a dozen studies before his, he found that when people were asked to memorize the information, activity in the lateral prefrontal cortex and the medial temporal lobe predicted successful remembering of that information later on. According to the standard explanation of the social encoding advantage, the same pattern should have been present or even enhanced when people did the social encoding task, but that isn't what happened. The traditional learning network wasn't sensitive to effective social encoding. Instead the central node of the mentalizing network, the dorsomedial prefrontal cortex, was associated with successful learning during social encoding. (284-5)
Lieberman suggests a number of interesting applications of this finding to change and hopefully improve the way we teach kids, who are intensely interested in the social world and not so interested in memorizing facts - such as by teaching history more in terms of the social dramas (rather than actions and dates), and math by engaging students as both tutors and tutees.

The book has sections on three stages of social development, which he terms connection, mindreading (theory of mind), and harmonizing - and argues that significant brain resources are devoted to maintaining connection with other people.  Harmonizing is about taking on many of the goals and behaviors of our social group (particularly active during adolescence).  The idea here is that the sense of self our brain supports is very susceptible to the social messages we receive.

Overall I liked this book - not that it really lives up to the subtitle 'Why Our Brains Are Wired to Connect' - it's more about 'How' than 'Why'. At its best it reminds us that we are truly social creatures, and the neuroscience helps illustrate that point.

Will we understand science in the future?

Tyler Cowen suggests not in his book 'Average Is Over' (2013).  The book is a bit of prognostication about the near future, looking mainly at how the use of computers is changing and will change our world.  The basic idea is that the people who can add value to computer work in some way will reap most of the rewards.

For the purposes of this blog, I thought the part about computer-driven science was most interesting. Cowen lists three reasons why science may become harder to understand:
1. In some (not all) scientific areas, problems are becoming more complex and unsusceptible to simple, intuitive, big breakthroughs.
2. The individual scientific contribution is becoming more specialized, a trend that has been running for centuries and is unlikely to stop.
3. One day soon, intelligent machines will become formidable researchers in their own right. (206)
And here's one attempt at a summary:
The remaining human knowledge of science will be very practical, very prediction-oriented, and well geared for improving our lives.  Of course those are all positive developments. Still, as a general worldview, science will not always be very inspiring or illuminating. The general educated public will to some extent be shut out from a scientific understanding of the world, and we will run the risk that they might detach from a long-term loyalty to scientific reasoning. (219)
It will be interesting to see how much of this thinking will apply to neuroscience.

23 October 2013

Brain decoding - how far can it go?

Kerri Smith has a good overview of the topic in "Brain decoding: Reading minds" at Nature.  The range of investigation goes from identifying the content of dreams to verifying whether someone is lying, to trying to understand the full process of how the brain can encode information.  But the starting point is fairly modest - trying to identify what object someone is looking at based on patterns in the visual area of the brain.  There's a good reason to start there:
Applying their techniques beyond the encoding of pictures and movies will require a vast leap in complexity. "I don't do vision because it's the most interesting part of the brain," says Gallant. "I do it because it's the easiest part of the brain. It's the part of the brain I have a hope of solving before I'm dead." But in theory, he says, "you can do basically anything with this."
But of course theory and practice are two different things, and there may be practical limits:
Devising a decoding model that can generalize across brains, and even for the same brain across time, is a complex problem. Decoders are generally built on individual brains, unless they're computing something relatively simple such as a binary choice — whether someone was looking at picture A or B. But several groups are now working on building one-size-fits-all models. "Everyone's brain is a little bit different," says Haxby, who is leading one such effort. At the moment, he says, "you just can't line up these patterns of activity well enough."
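As an aside, the 'relatively simple' binary decoders mentioned here are conceptually straightforward.  Here's a minimal sketch in Python - a nearest-centroid classifier run on made-up numbers standing in for voxel data, not any lab's actual model:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for fMRI data: 40 trials x 50 voxels per condition, where
    # each condition (picture A vs. picture B) has its own mean activity
    # pattern plus per-trial noise.  Real voxel data is far messier.
    n_trials, n_voxels = 40, 50
    pattern_a = rng.normal(0, 1, n_voxels)
    pattern_b = rng.normal(0, 1, n_voxels)
    trials_a = pattern_a + rng.normal(0, 2, (n_trials, n_voxels))
    trials_b = pattern_b + rng.normal(0, 2, (n_trials, n_voxels))

    # 'Train' a nearest-centroid decoder on the first half of the trials.
    centroid_a = trials_a[:20].mean(axis=0)
    centroid_b = trials_b[:20].mean(axis=0)

    def decode(trial):
        # Label a held-out trial by whichever training centroid is closer.
        closer_to_a = (np.linalg.norm(trial - centroid_a)
                       < np.linalg.norm(trial - centroid_b))
        return 'A' if closer_to_a else 'B'

    # Test on the held-out second half of the trials.
    correct = (sum(decode(t) == 'A' for t in trials_a[20:])
               + sum(decode(t) == 'B' for t in trials_b[20:]))
    print(f"decoding accuracy: {correct / 40:.0%}")

Since the centroids are learned from one subject's activity patterns, a decoder trained this way has no reason to transfer to a different brain - which is Haxby's alignment problem in miniature.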
Using this kind of research to detect 'secret' product preferences seems pretty misguided to me.  But that doesn't stop some from trying!

01 October 2013

Decide what you think - it matters!

Tom Stafford at mindhacks.com writes on free will studies that indicate some interesting side effects of reading about a deterministic model.  Here's the bottom line:
This is a young research area. We still need to check that individual results hold up, but taken all together these studies show that our belief in free will isn’t just a philosophical abstraction. We are less likely to behave ethically and kindly if our belief in free will is diminished.
Personally I do think that regardless of the exact underlying physical mechanisms, one's choices help set the pattern for future behaviors, so it's best to act carefully and with forethought!

24 September 2013

Follow-up on the brain-to-brain experiments

Mind Hacks blog presents a nice short analysis of the UW experiment ("It is mind control but not as we know it"), written by Tom Stafford.  Previously I logged an entry on the brain-to-brain communication experiment conducted at the University of Washington by Rajesh Rao.  Here's the gist from Stafford:
In information terms, this is close to as simple as it gets. Even producing a signal which said what to fire at, as well as when to fire, would be a step change in complexity and wasn’t attempted by the group. TMS is a pretty crude device. Even if the signal the device received was more complex, it wouldn’t be able to make you perform complex, fluid movements, such as those required to track a moving object, tie your shoelaces or pluck a guitar. But this is a real example of brain to brain communication.

As the field develops the thing to watch is not whether this kind of communication can be done (we would have predicted it could be), but exactly how much information is contained in the communication.

27 August 2013

Human-to-human brain communication

A very limited form of brain-to-brain communication is described in a story on research at the University of Washington: "Researcher controls colleague’s motions in 1st human brain-to-brain interface" by Doree Armstrong and Michelle Ma, Aug 27, 2013. The experiment used EEG to pick up the sender's thoughts of simple movement and relayed the signal via Skype to the receiver, who got it through transcranial magnetic stimulation - "a noninvasive way of delivering stimulation to the brain to elicit a response.... in this case, it was placed directly over the brain region that controls a person’s right hand."
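For flavor, here's roughly what the sender's side of such a pipeline could look like.  This is a toy sketch on synthetic data - the sample rate, frequency band, and threshold are all my assumptions, not the UW team's actual code.  The idea is to detect the drop in mu-band power over motor cortex that accompanies imagined hand movement, yielding a single 'fire' bit to send over the network:

    import numpy as np

    rng = np.random.default_rng(0)
    FS = 250            # EEG sample rate in Hz (an assumption)
    MU_BAND = (8, 12)   # mu rhythm over motor cortex drops during imagined movement

    def mu_power(window):
        # Mean spectral power in the mu band for one EEG window.
        freqs = np.fft.rfftfreq(len(window), d=1 / FS)
        power = np.abs(np.fft.rfft(window)) ** 2
        in_band = (freqs >= MU_BAND[0]) & (freqs <= MU_BAND[1])
        return power[in_band].mean()

    def should_fire(window, baseline, drop=0.5):
        # Trigger when mu power falls well below the resting baseline
        # (event-related desynchronization, the usual motor-imagery marker).
        return mu_power(window) < drop * baseline

    # Synthetic demo: a resting second of EEG with a strong 10 Hz rhythm,
    # then an 'imagined movement' second with that rhythm suppressed.
    t = np.arange(FS) / FS
    rest = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=FS)
    imagery = 0.2 * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=FS)

    baseline = mu_power(rest)
    print(should_fire(rest, baseline))     # False - no trigger at rest
    print(should_fire(imagery, baseline))  # True - send the one-bit 'fire' signal

Note how little information gets through: one bit, fire or don't, which is consistent with the limits the researchers describe below.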

I believe there are quite severe limits to the type of signal which could actually be transmitted and received via this mechanism, and the researchers confirm:

At first blush, this breakthrough brings to mind all kinds of science fiction scenarios. Stocco jokingly referred to it as a “Vulcan mind meld.” But Rao cautioned this technology only reads certain kinds of simple brain signals, not a person’s thoughts. And it doesn’t give anyone the ability to control your actions against your will.

Both researchers were in the lab wearing highly specialized equipment and under ideal conditions. They also had to obtain and follow a stringent set of international human-subject testing rules to conduct the demonstration.

“I think some people will be unnerved by this because they will overestimate the technology,” Prat said. “There’s no possible way the technology that we have could be used on a person unknowingly or without their willing participation.”

21 June 2013

Thin slicing the brain.

Creates a whole lot of data!  Nature reports on 'Whole human brain mapped in 3D' by Helen Shen, June 20, 2013.  The atlas, nicknamed 'BigBrain', was created from 7,400 slices of a human brain, each thinner than a human hair.  Here's the quick summary:
The brain is comprised of a heterogeneous network of neurons of different sizes and with shapes that vary from triangular to round, packed more or less tightly in different areas. BigBrain reveals variations in neuronal distribution in the layers of the cerebral cortex and across brain regions — differences that are thought to relate to distinct functional units.
Given that we are still working on a model for a simple worm with 302 neurons, there's obviously a long way to go with the full human brain.  But you gotta start somewhere, and I'm sure that having an accurate map will help (for now it's drawn from just one example, but as they map more brains they will get an idea of the individual differences that are possible - I'll bet they can be pretty significant).

18 June 2013

What's the program in the Chinese Room?

It's really big and complicated!  That's my main takeaway from Dennett's writing on the Searle thought experiment (in Intuition Pumps and other books).

Here's the description of the scenario from Wikipedia:
It supposes that there is a program that gives a computer the ability to carry on an intelligent conversation in written Chinese. If the program is given to someone who speaks only English to execute the instructions of the program by hand, then in theory, the English speaker would also be able to carry on a conversation in written Chinese. However, the English speaker would not be able to understand the conversation. Similarly, Searle concludes, a computer executing the program would not understand the conversation either.
So - what might this program consist of?  Obviously there is no simple algorithm for taking in a string of Chinese characters one by one, and sending out a meaningful response character by character.  It would need all sorts of features, such as memory of the current conversation (to provide context to any given input), ability to distinguish questions from comments from opinions, and so much more.  Of course any such program could never be carried out in a step by step manual way by a person, unless you are willing to wait days if not months or years for responses!
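Just to make that concrete, here's a toy skeleton (in Python, and entirely my own invention - every name is hypothetical) of the kind of state and branching the rule book would need, with the genuinely hard parts reduced to placeholders:

    class ChineseRoom:
        # Toy skeleton of Searle's rule book - illustration only.

        def __init__(self):
            self.history = []   # memory of the conversation so far, for context

        def classify(self, utterance):
            # Crude stand-in for distinguishing questions from comments/opinions.
            return 'question' if utterance.endswith(('?', '\uff1f')) else 'comment'

        def respond(self, utterance):
            self.history.append(utterance)
            # A real rule book would need an enormous number of context-sensitive
            # rules here, consulting self.history for topic, pronouns, tone, etc.
            if self.classify(utterance) == 'question':
                reply = self.lookup_answer_rules(utterance)
            else:
                reply = self.lookup_comment_rules(utterance)
            self.history.append(reply)
            return reply

        def lookup_answer_rules(self, utterance):
            return '...'   # placeholder for the (vast) question-answering rules

        def lookup_comment_rules(self, utterance):
            return '...'   # placeholder for the (vast) acknowledgement rules

Even this cartoon makes the scale problem visible: all the intelligence lives in the two placeholder lookups, and a person executing those rules by hand is exactly where the thought experiment gets its slow-motion absurdity.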

If we simply assume that such a program exists and works as described, then it does seem to me that the outsider interacting with the room would grant a level of understanding to it.  The Watson program that can play Jeopardy seems to be getting relatively close to this level of sophistication, although it was built for the Answer/Question format only.

10 June 2013

What is a zombie?

Perhaps a whole lot more than you thought...  some thinking on Dennett and the zombie concept, mostly drawn from Intuition Pumps (2013).

The philosophical concept of the zombie seems to start from a fairly simple definition: "a hypothetical being that is indistinguishable from a normal human being except in that it lacks conscious experience, qualia, or sentience." (drawn from Wikipedia)

Dennett's important point is that given this definition, it must be true that this zombie has very complex functional abilities - it must have functionality to support all sorts of things that normal humans can do - such as visual-auditory-olfactory-touch-taste sensory input, memory (perhaps slightly faulty), color recognition, facial recognition & linking face to name, ability to know that recognition of friends & family should trigger different behavior than recognition, say, of a politician, and so much more.  While by definition it does not have conscious experience, it's hard to say how one could ever confirm that this was the case.

Dennett then goes on to examine what he calls a subset of zombies, those which have "equipment that permits it to monitor its own activities, both internal and external, so it has internal (nonconscious) higher-order informational states that are about its other internal states." (p. 290). He calls these 'zimboes', but it's unclear to me whether such equipment is actually necessary in all zombies in order to produce the definitional behavior of being indistinguishable.  Dennett claims that only a zimbo can "hold its own in everyday interactions" - and that's my sense as well: such equipment is needed to be indistinguishable from a normal human.  So I guess I'm unsure why Dennett creates this new category of zimbo, if the zimbo has equipment that all zombies must have.

At a later point, Dennett examines a couple of cases of non-normal human pathologies around facial recognition.  Prosopagnosics are people who do not recognize people's faces, while people with Capgras delusion can recognize people but believe they are 'impostors' - not truly the person they resemble.  Research on brain function seems to indicate that there are at least two mechanisms at work in normal facial recognition: unconscious visual processing that also ties into emotional recognition, and conscious recognition of 'knowing who it is'.  If you can show that the unconscious mechanisms are broken (as apparently in the case of Capgras), leading to an altered conscious experience (the sense of an impostor), then it appears the qualia are quite tightly tied to the unconscious mechanisms (but by definition qualia are supposed to be the conscious bit).

In other words, it's hard to draw a neat line around qualia when you look closely.  Again pointing out the difficulty of truly imagining the zombie.

I would argue that regardless of the brain mechanisms in use (and I do agree that many modules or mechanisms are used), the subjective experience as a whole is the emergent phenomenon of interest, and the prosopagnosic indeed has a different subjective experience than normal people, as does the Capgras subject. It is a fact that the Capgras subject is deluded about reality, but that fact doesn't alter the subjective experience of seeing people as impostors.

07 June 2013

Dennett and reports on consciousness...

I've been reading some Daniel Dennett lately (both Consciousness Explained and his new Intuition Pumps) and reflecting on many of his concepts.  Dennett argues for what he calls heterophenomenology as a method of scientifically researching consciousness: basically, taking reports from subjects in as neutral a way as possible (i.e. minimizing assumptions), and then trying to evaluate and explain those reports (i.e. are they right, what causes them, etc.).  In at least some descriptions, he seems to see the goal as simply a binary true/false evaluation - presumably on whether the perception matches what's really evident (as judged by objective observers).

Rather than a simple binary evaluation of right/wrong, I propose there are multiple angles that can and should be examined.
1. If the report includes descriptions of the world outside the subject, how do these compare to reports of 3rd parties? Or to other measures of reality?
2. If the report includes descriptions of internal sensations, how do these compare to what we know about the physical basis for the senses?
3. If the report includes an explanation or reason for the subject's experience, how does that compare to various existing theories and studies of behavior?

So in the case of the blind spot, I think most people will not report any blind spot unless they follow a specific procedure: staring at one point while moving another point off to the side closer or farther away until it can't be seen.  Physically we know there are no rods and cones at the back of the eye where the optic nerve exits.  So on criterion 1, there is actually a good match with reality, because there appears to be "filling in" of the spot, likely achieved because the eyes are usually shifting around, not staring at one point, and somehow a full visual field is produced (criterion 2).

Many visual illusions indicate that the perception includes features that are not really in the picture.  This seems to indicate that there is construction or filling in of apparent patterns.  In general I suspect this is a useful feature in dealing with the world, in particular for cases where what we are looking at is partially obscured.  

Now consider a case where the subject unknowingly has been given a drug that commonly causes hallucinations.  The subject reports that the furniture appears to be melting.  Here there is an incorrect match with reality.  If the subject reports that he may be "losing his mind", hopefully the observer will let him know that in fact the experience is due to a drug and will end soon.  The subject did not really know the reason for the experience.
While the subject's report about the outside world is clearly wrong, I don't think we can say that the subject's internal experience is wrong or untrue.  In this case we know the drug has neurochemical properties, one effect of which is to alter the subject's experience.

In other cases of anomalous internal experiences, like a near death experience or an out-of-body experience, we can say that other observers in the immediate area could not detect any outside (i.e. real world) trace of it, but not that the report is wrong per se.  I think it's worth trying to both explain how such experiences can occur, and whether such experiences are a result of and/or can result in physical changes (such as neuronal rewiring).

26 May 2013

Jaron Lanier on social media

A passage from You Are Not a Gadget, his 2010 book.  Just found this bit intriguing, though I'm not sure I fully buy it:
Children want attention. Therefore, young adults, in their newly extended childhood, can now perceive themselves to be finally getting enough attention, through social networks and blogs. Lately, the design of online technology has moved from answering this desire for attention to addressing an even earlier developmental stage.

Separation anxiety is assuaged by constant connection. Young people announce every detail of their lives on services like Twitter not to show off, but to avoid the closed door at bedtime, the empty room, the screaming vacuum of an isolated mind. (p. 180)

20 May 2013

Minds, brains and woo.

Mind is not a redundant concept.  I'm with Andrew Brown in this short piece from the Guardian (17 June 2011), 'Minds, brains and woo.'  Here's the ending:
There is something very odd about the idea that the mind is an illusion that a brain has about itself (which is what is implied in a lot of this talk). Illusions are themselves things that only minds can have. An illusion, or a delusion, demands that there is a subjectivity being deluded. If a Buddhist says that the world is an illusion, at least they are being consistent, in that they suppose the ultimate reality is more like a mind – the kind of thing that can have an illusion or can be deluded. But no one can fool a rock, or a computer. Why should a brain be different? 

18 May 2013

Modeling simple worms.

302 neurons - It's a start!

'Is This Virtual Worm the First Sign of the Singularity' by Alexis Madrigal at the Atlantic, May 17, 2013 (poor title, but good article) describes efforts to computationally model a relatively simple life form, the C. elegans worm.  The project raises interesting questions about a number of things, including: what is life? and what constitutes understanding?
"If you're going to understand a nervous system or, more humbly, how a neural circuit works, you can look at it and stick electrodes in it and find out what kind of receptor or transmitter it has," said John White, who built the first map of C. elegans's neural anatomy, and recently started contributing to the project. "But until you can quantify and put the whole thing into a computer and simulate it and show your computer model can behave in the same way as the real one, I don't think you can say you understand it."
This species of worm apparently has 959 cells, 302 of which are neurons.  These neurons form approximately 10,000 connections.  Part of what is unknown is just how much of the behavior of each cell needs to be fully modeled in order to simulate the worm's behavior.  I think it's probably a pretty good place to start.
I asked several researchers whether simulating the worm was possible.  "It's really a difficult thing to say whether it's possible," said Steven Cook, a graduate student at Yale who has worked on C. elegans connectomics. But, he admitted, "I'm optimistic that if we're starting with 302 neurons and 10,000 synapses we'll be able to understand its behavior from a modeling perspective." And, in any case, "If we can't model a worm, I don't know how we can model a human, monkey, or cat brain."
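To give a sense of what 'put the whole thing into a computer and simulate it' can mean at its very simplest, here's a toy sketch: 302 leaky integrate-and-fire units wired by a random matrix standing in for the real connectome.  The actual OpenWorm effort models each cell in far more biophysical detail; every number below is an arbitrary placeholder:

    import numpy as np

    rng = np.random.default_rng(1)

    N_NEURONS = 302
    # Random sparse stand-in for the real C. elegans wiring diagram:
    # ~10,000 of the 302 x 302 possible connections, with arbitrary
    # signs for excitatory/inhibitory synapses.
    weights = np.zeros((N_NEURONS, N_NEURONS))
    idx = rng.choice(N_NEURONS * N_NEURONS, size=10_000, replace=False)
    weights.flat[idx] = rng.normal(0, 0.5, size=10_000)

    v = np.zeros(N_NEURONS)       # membrane potentials
    LEAK, THRESHOLD = 0.9, 1.0

    for step in range(100):
        external = rng.normal(0, 0.2, N_NEURONS)    # stand-in sensory drive
        spikes = v >= THRESHOLD                     # which neurons fire this step
        v[spikes] = 0.0                             # reset the neurons that fired
        v = LEAK * v + weights @ spikes.astype(float) + external
        if step % 20 == 0:
            print(f"step {step:3d}: {spikes.sum():3d} neurons fired")

The open question the article raises sits exactly in the gap between this cartoon and reality: how much per-cell biology has to be added before the simulated worm behaves like the real one.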

16 May 2013

Brain Stimulation - a net positive?

'The Hidden Costs of Cognitive Enhancement' by Greg Miller, Wired (March 5, 2013), points to some findings that show there are tradeoffs.  There are a number of studies going on using low levels of electrical stimulation to particular areas of the brain while subjects attempt cognitive tasks such as learning a new abstract math system.  While there appear to be some findings of faster learning, researcher Roi Cohen Kadosh and his colleague Teresa Iuculano at the University of Oxford also looked for follow-on effects.
Those who had the parietal area involved in numerical cognition stimulated learned the new number system more quickly than those who got sham stimulation, the researchers report today in the Journal of Neuroscience. But at the end of the weeklong study their reaction times were slower when they had to put their newfound knowledge to use to solve a new task that they hadn’t seen during the training sessions. “They had trouble accessing what they’d learned,” Cohen Kadosh said.
See also this story 'Electrical Brain Stimulation Helps People Learn Math Faster' - also by Miller - May 16, 2013.

23 April 2013

Who's in charge?

Gazzaniga's 2011 book reviews the neuroscientific evidence.

Who's in Charge? by Michael Gazzaniga (he's famous for the early split brain studies, finding some of the ways the two hemispheres differ) is a pretty light read on current neuroscience, with the angle of looking at the idea of free will and what it means for personal responsibility and law.  He covers some of his own findings, in particular around the left brain 'interpreter' which appears to be very good at making up stories to fit the apparent evidence.

While Gazzaniga lays out the strict reductionist/determinist viewpoint very effectively, he backs away from that outlook, and describes interlocking, complementary systems of upward and downward causality.  He quotes a computer analogy from David Krakauer that makes good sense to me:
We do not program at the level of electrons, Micro B, but at a level of a higher effective theory, Macro A (for example, Lisp programming) that is then compiled down, without loss of information, into the microscopic physics. Thus, A causes B. Of course, A is physically made from B, and all the steps of the compilation are just B with B physics. But from our perspective, we can view some collective B behavior in terms of A processes. (p. 139)
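As a trivial concrete illustration of the analogy (my example, not Krakauer's): Python's dis module will show you the 'Micro B' bytecode underneath a 'Macro A' function, and reasoning at the A level still predicts everything the B level does:

    import dis

    def macro_a(x):
        # The computation described at the high level ('Macro A').
        return x * 2 + 1

    dis.dis(macro_a)    # the same computation at the lower level ('Micro B')
    print(macro_a(10))  # level-A reasoning predicts 21, and that's what B produces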
Moving over to the world of the brain, he takes a shot at the importance put on Libet's findings of brain activity preceding conscious awareness of movement:
What difference does it make if brain activity goes on before we are consciously aware of something? Consciousness is its own abstraction on its own time scale and that time scale is current with respect to it. Thus, Libet's thinking is not correct. That is not where the action is, any more than a transistor is where the software action is. (p. 141)
He goes on to investigate the idea of the social mind - how our behavior is shaped and constrained by the actions of others, some directly and some more indirectly via cultural norms.  This is a whole other emergent level of causation that the reductionist viewpoint cannot really describe.

The last portion is about the legal aspects, and here I was a bit surprised that he did not reference David Eagleman, since that is a big area of Eagleman's focus; overall the treatment felt a bit too light to really do the subject justice.

But overall I enjoyed the book, and agreed in general with his scientific and philosophical take on things.

Here's a review from WSJ online by Raymond Tallis.

08 April 2013

World Wide Mind - is it really coming? Maybe so.

Michael Chorost's 2011 book World Wide Mind makes the case for the 'coming integration of humanity, machines and the internet'.  Taking off from Rebuilt, the book predicts that over the next 30 years or so many people will use direct brain implants to both receive inputs from others and broadcast out meaningful impulses in some fashion.

Overall I felt the science here is pretty believable.  We are learning very quickly about the brain, and how to detect neuron groups (cliques) that correlate with certain concepts.  One can imagine this work continuing at a rapid pace.  Chorost's experience with cochlear implants lends credibility to the technical progress in the area of direct stimulation of neurons to produce valuable sensory input.  He describes some nano-technology possibilities for putting outside tech in touch with many areas of the brain (more invasive in terms of its reach, less invasive in terms of not requiring head surgery).

The picture he paints in the book is one of constant reception of low-level inputs from other people you are linked to.  For instance, if a co-worker has some new ideas, you could receive inputs about the cluster of concepts connected to the idea, and along with standard written communication this could help trigger new ideas in you.  Chorost rightly points out that these inputs are not going to give you the other person's experience - rather you will get some inputs that will trigger your own experience, given your own memories, etc.

There are two main points that I think deserved more attention in this book.  The first is the why question.  Why will people feel compelled to have these types of inputs?  One example from the book did not make this case well at all:
Having brainlike computers would greatly simplify the process of extracting information from one brain and sending it to another.  Suppose you have such a computer, and you're connected with another person via the World Wide Mind.  At the moment you're observing each other's visual experiences.  You see a cat on the sidewalk in front of you.  Your rig is able to watch neural activity in your neocortex with its optogenetic circuitry.  It sees activity in a large percentage of neurons constituting your brain's representation of a cat.  To let your friend know you're seeing a cat, it sends three letters of information - CAT - to the other person's implanted rig.  That person's rig activates her brain's invariant representation of a cat, and she sees it. (p. 135)
Wow - distinctly underwhelming!  All this tech to send a three letter message, and wouldn't the receiver have the same experience if they received a text message with the word 'cat'?  To be fair, Chorost does give somewhat more compelling examples later in the book, but after this weak start I was pretty skeptical.

The second issue - let's say everyone had such technology implanted.  It seems to me that many commercial forces would be chomping at the bit for the ability to send inputs to everyone constantly - fast food joints might want to send you ideas of hamburgers and milk shakes, and so on.  And what kind of inputs might a government wish to send out to citizens?  Chorost spends a couple pages (196-8) on these types of questions in the final chapter, but for me it was a case of too little too late.

Overall an interesting book, one that imagines some potential developments in the field, is perhaps a bit optimistic about our ability to solve each hard problem that arises, and is sincere in its investigations of how such technology could connect us in new, meaningful ways.  The book also includes some touchy-feely material about a workshop the author attended, and it may turn off some readers.

Here's a link to the NY Times review by Katherine Bouton.  Here's another more critical blog reaction from Backreaction (two physicists).

07 April 2013

Wagging a rat's tail via EEG signal - not much here.

This week saw many stories on the experiment done by Seung-Schik Yoo of Harvard Medical School.

"Interspecies telepathy: human thoughts make rat move" by Sara Reardon in NewScientist sums it up pretty well (bad title by the way, I'd say, since wires were involved!).
The human volunteers wore electrode caps that monitored their brain activity using electroencephalography (EEG). Meanwhile, an anaesthetised rat was hooked up to a device that made the creature's neurons fire whenever it delivered an ultrasonic pulse to the rat's motor cortex.
But the trigger detected by the EEG was simply a change in the person's concentration, which was enough to send the pulse. As Ricardo Chavarriaga (of the Swiss Federal Institute of Technology) comments:
More importantly, Chavarriaga says, the experiment will not be meaningful until the human's intention corresponds with the rat's action. For instance, a person might imagine moving their left hand to move the rat's left paw. Yoo's approach would not be of any use for that because it only tells us that a person's mental focus has changed, not what the thought or sensation behind the change is.
So overall I think this is not so interesting, just one more fairly minor step.

01 April 2013

Michael Chorost's Rebuilt (2005).

Tells of regaining hearing via cochlear implant - 16 pins that feed impulses into Chorost's neurons near the ear.  Those of us with organic hearing are using a bunch of tiny hairs to pick up sound waves, and we likely don't think too much about how it all works, or whether we are really getting a 'realistic' sonic landscape.  But Michael Chorost suddenly lost his hearing in 2001, and chose to get the surgery to have an implant placed in his head, and Rebuilt tells the story in a compelling fashion by mixing the science, his experience of hearing, the concept of the cyborg, and his thoughts on communicating and connecting with other people.

While I've read a number of things that describe the neuroscience of vision in some detail, this was the first book I've found that does something similar for the auditory sense.  The full rig that makes the implant useful includes a microphone, a processor that runs software to scan the sound and adjust/filter the input to create output for the 16 pins, and a radio relay unit, held magnetically against the skin of the skull, that transmits to the implant beneath.  Many rounds of mapping are performed to fine-tune the processing to the specifics of the way the pins transmit data into the brain, and new software in the processor can make improvements by (for example) increasing the transmission rate.
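Here's a toy sketch of the core idea of that processing stage - splitting sound into one frequency band per pin.  Real implant processors use proper filter banks, time-varying envelopes, compression, and the per-patient maps described above; all the numbers here are my assumptions:

    import numpy as np

    FS = 16_000        # sample rate, an assumption
    N_PINS = 16        # one frequency band per electrode pin

    def pin_levels(audio):
        # Split the signal's spectrum into 16 log-spaced bands (roughly
        # mirroring the cochlea's frequency layout) and sum each band's energy.
        spectrum = np.abs(np.fft.rfft(audio)) ** 2
        freqs = np.fft.rfftfreq(len(audio), d=1 / FS)
        edges = np.logspace(np.log10(100), np.log10(FS / 2), N_PINS + 1)
        return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                         for lo, hi in zip(edges[:-1], edges[1:])])

    # Demo: a pure 440 Hz tone should drive essentially one low-frequency pin.
    t = np.arange(FS) / FS
    levels = pin_levels(np.sin(2 * np.pi * 440 * t))
    print((levels / levels.max()).round(2))   # relative stimulation per pin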

One of the stories that really brought the whole concept home for me was the telling of hooking the rig up to a CD player, such that no sound waves were produced - simply electronic patterns, which were transmitted to the implant and created the experience of hearing for Chorost.  Just as our vision is a creation of the brain, so too is our auditory sense.  And it's quite interesting to track how his hearing ability improves over time, both due to software enhancements and the neuro-plasticity of the brain.

Chorost mulls over the concept of the cyborg quite extensively through the book, contrasting this concept of the technologically enhanced human with the quite different notion of a robot.  Knowing that your hearing is dependent on the hardware and software running on various gizmos will do that to a person.  He stresses the point that his implant does not improve on healthy human hearing, and takes issue with some of the more extravagant claims of Kevin Warwick (who wrote in I, Cyborg, "We will interface with machines through thought signals.")  We don't really have any clue yet how thoughts might be represented as an interface to the brain - in this situation what's being transmitted are representations of sound, not thought!


28 February 2013

The brains of rats...


Go better together!  Curious story today in the Guardian, "Brains of rats connected to allow them to share information via internet" by Ian Sample (Feb 28, 2013).  The internet part of it seems pretty gimmicky - not sure why that's important - and it's a little unclear to me what is really going on here.  The substantive portion is described like this:
The scientists first demonstrated that rats can share, and act on, each other's sensory information by electrically connecting their brains via tiny grids of electrodes that reach into the motor cortex, the brain region that processes movement.

The rats were trained to press a lever when a light went on above it. When they performed the task correctly, they got a drink of water. To test the animals' ability to share brain information, they put the rats in two separate compartments. Only one compartment had a light that came on above the lever. When the rat pressed the lever, an electronic version of its brain activity was sent directly to the other rat's brain. In trials, the second rat responded correctly to the imported brain signals 70% of the time by pressing the lever.

Remarkably, the communication between the rats was two-way. If the receiving rat failed at the task, the first rat was not rewarded with a drink, and appeared to change its behaviour to make the task easier for its partner.
So it sounds like impulses from one rat, which were generated upon a certain movement, were sent in some format into the other rat's brain, into its motor cortex.  That in turn seemed to influence the movements/'decisions' of the receiving rat.

Some questions I have about this: (1) how much of a learning period was involved? (2) how many notable movement/decision options did the receiving rat actually have? (3) was there some timing boundary within which the rat had to take action to be seen as successful communication? (4) what was the success rate? Maybe it's all answered in the paper.

For me it raises the question of whether essentially any patterned impulses received by certain brain areas could be successfully 'interpreted' - and to what level of discrimination.  In this case the receiver may be doing little more than using the reception as a timing signal to make a movement, but perhaps it can go deeper than that.

The article ends with a good reminder that there's plenty we don't know in this area:
Very little is known about how thoughts are encoded and how they might be transmitted into another person's brain – so that is not a realistic prospect any time soon. And much of what is in our minds is what Sandberg calls a "draft" of what we might do. "Often, we don't want to reveal those drafts, that would be embarrassing and confusing. And a lot of those drafts are changed before we act. Most of the time I think we'd be very thankful not to be in someone else's head."
(H/t to twitterers @pourmecoffee and @neurophilosophy)

Update:  Here's another report from NYT:  "One Rat Thinks, and Another Reacts"

27 February 2013

Knowing vs. Understanding

Came across this Slate post "Explain it to me again, computer" by Samuel Arbesman (Feb 25, 2013).  Here's the initial question:
But whether or not science is always moving forward or whether we think we have the final view of how the world works (which we almost certainly do not), we pride ourselves on our ability to understand our universe. Whatever its complexity, we believe that we can write down equations that will articulate the universe in all its grandeur.

But what if this intuition is wrong? What if there are not only practical limits to our ability to understand the laws of nature, but theoretical ones?
I don't think I concur with this intuition, but here's the interesting twist:
A computer program known as Eureqa that was designed to find patterns and meaning in large datasets not only has recapitulated fundamental laws of physics but has also found explanatory equations that no one really understands. And certain mathematical theorems have been proven by computers, and no one person actually understands the complete proofs, though we know that they are correct.
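For a feel of what such a program does: Eureqa evolves candidate equations via genetic programming over expression trees, but the kernel of the idea is just scoring symbolic forms against data.  A toy sketch of my own, with a tiny hand-written hypothesis space in place of evolution:

    import itertools
    import numpy as np

    rng = np.random.default_rng(2)

    # Pretend dataset: noisy samples of an unknown law (secretly x**2 + 3x).
    x = np.linspace(-5, 5, 100)
    y = x**2 + 3 * x + rng.normal(0, 0.5, x.size)

    # A tiny hypothesis space of candidate equation forms.
    candidates = {
        'a*x + b':      lambda p: p[0] * x + p[1],
        'a*x**2 + b*x': lambda p: p[0] * x**2 + p[1] * x,
        'a*sin(x) + b': lambda p: p[0] * np.sin(x) + p[1],
    }

    def fit_error(model):
        # Best mean-squared error over a crude grid search of the parameters.
        grid = np.linspace(-5, 5, 41)
        return min(np.mean((y - model((a, b))) ** 2)
                   for a, b in itertools.product(grid, grid))

    best = min(candidates, key=lambda name: fit_error(candidates[name]))
    print('best-fitting form:', best)   # recovers a*x**2 + b*x

The unsettling scenario in the quote is when the winning form is something no human can interpret, even though it keeps predicting correctly.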
I think in some ways technology moves ahead in this way, by finding 'true' behavior (rules that work in the world) and exploiting it to enable new techniques.  Science can often come along later with theoretical justification and explanation.  But to think that we may never really understand some of these findings, and just accept them, does feel a little destabilizing.

(H/t Andrew Sullivan)

28 January 2013

Sleep and memory. Do they go together?

New study says yes.  In the NY Times (Jan 27, 2013), "Aging in Brain Found to Hurt Sleep Needed for Memory" by Benedict Carey reports on the new findings.
Previous research had found that the prefrontal cortex, the brain region behind the forehead, tends to lose volume with age, and that part of this region helps sustain quality sleep, which is critical to consolidating new memories. But the new experiment, led by researchers at the University of California, Berkeley, is the first to directly link structural changes with sleep-related memory problems.

The findings suggest that one way to slow memory decline in aging adults is to improve sleep, specifically the so-called slow-wave phase, which constitutes about a quarter of a normal night’s slumber.
Researchers are trying to use electrodes on the skull to help recreate the wave patterns that are associated with healthy sleep.

26 January 2013

How much are we truly aware of?

Not so much, according to David Eagleman's "Incognito", which describes how much flies under our conscious radar.  Overall I thought this book was quite good, a better introduction to these topics than several others I've read.

I won't write up too much about it, but there were a few points that I found new and worth noting.  First was this:  "Throughout the brain there is as much feedback as feed-forward" (p. 46).  This is described in reference to the visual system, with an example that the act of imagining a scene will cause the low-level visual system to light up with activity.  In this sense we create our visual world internally, and influence the processing of signals that come in via our eyes.  But I think the general principle is extremely important, that mental acts can cause all sorts of brain activity, and very likely "re-wire" neurons.

The other area that Eagleman is especially interested in has to do with personal responsibility and the legal system.  He argues that the quest to assign blame is less useful than taking a more forward-oriented approach: is the person likely to continue to be a danger to society, or not?  He argues that there are all sorts of reasons why a person might have acted in a certain way, from contextual cues to biological causes (for example, a brain lesion could have eroded certain mental functions), but if the context is unlikely to recur, or the biological problem has been cured, then the person is unlikely to repeat the crime.  Obviously this then ties into drug addiction and our 'war on drugs', which has created such a large prison population.

Eagleman is apparently working on his next book on neuroplasticity, and I look forward to it.


08 January 2013

The Self Illusion - Bruce Hood (2012)

Quick review: typos, rehashes, some good points.

Bruce Hood is a UK professor in psychology, and he studies child development.  The Self Illusion is his 2012 book, subtitled 'How the Social Brain Creates Identity' - but unfortunately he doesn't actually spend all that much time directly focused on ideas of the 'social brain'.  Too much of the book tells of well-worn research findings from Libet, Milgram, Zimbardo, etc.  And the book feels rushed and a little sloppy, due to the many typos (at least a dozen) and a format that feels like a compendium of longish blog posts not fully tied together.

That said, there are a few things I felt worth covering from the book.  Hood makes it clear that his meaning of 'illusion' is not that there's nothing there, but that it is not what it seems.  The main point of the book seems to be that there is no truly singular, consistent 'self' - we all behave in different ways depending on the social & environmental context we find ourselves in, plus we are frequently driven by forces and incentives of brain processes that we aren't directly aware of.  However we all tend to feel that we are autonomous beings with some level of free will - and there appear to be very healthy psychological benefits from that mindset.

The information on certain aspects of child development was interesting.  Hood writes on p. 46, "So long as our interactions are timed to the babies' activity, they pay attention to us."  In a sense babies are selecting the adults that are most attentive to them, those likely to be good care-givers.  There's also interesting material on critical periods of development, where missing the window can be devastating.

There is a bit of coverage on split-brain patients, and here I found an internal inconsistency that strikes me as typical of mind/brain confusion.  On page 130 there's a section titled 'Being in Two Minds' that introduces the split-brain idea and research by Michael Gazzaniga.  But later, on page 233, he notes "Gazzaniga has proposed that there are not two separate minds or selves in these split-brain patients."  I think this is an important point - if mind is the subjective experience, then even though communication between the hemispheres of the brain is incomplete, and behavior is not fully coordinated, the subjective mind experience is singular.

The book includes a chapter of musing about the impact of the internet and social media in particular.  It's probably too early to draw conclusions, but I think Hood is right to question how this context will influence the social development of people, and what it may do to one's sense of self.

Here's a short interview between Hood and Sam Harris.