08 March 2026

Brain connectome model running in computer

Well, this is pretty interesting.

In 2024, Eon senior scientist Philip Shiu and collaborators published in Nature a computational model of the entire adult Drosophila melanogaster brain, containing more than 125,000 neurons and 50 million synaptic connections, built from the FlyWire connectome and machine learning predictions of neurotransmitter identity. That model predicted motor behavior at 95% accuracy. But it was disembodied: a brain without a body, activation without physics, motor outputs with nowhere to go.

Now the brain has somewhere to go.

The blog post "The First Multi-Behavior Brain Upload", with an accompanying video, describes how they've integrated "Eon’s connectome-based brain emulation with a physics-simulated fly body" and shows the resulting virtual fly moving in a space.
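The underlying 2024 model is, per the paper's title, a leaky integrate-and-fire network driven by the connectome's signed synaptic weights. A toy sketch of that kind of update rule, with all sizes, weights, and parameters invented for illustration (this is not the paper's code):

```python
# Toy leaky integrate-and-fire network in the spirit of connectome-based
# models like Shiu et al.; sizes, weights, and parameters are illustrative,
# not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 200                            # toy size; the fly model has >125,000 neurons
W = rng.normal(0.0, 0.5, (n, n))   # signed weights: + excitatory, - inhibitory
v = np.zeros(n)                    # membrane potentials
tau, v_thresh, v_reset = 10.0, 1.0, 0.0

def step(v, external, dt=1.0):
    """One timestep: fire-and-reset, then leak and integrate inputs."""
    spikes = v >= v_thresh
    v = np.where(spikes, v_reset, v)                      # reset neurons that fired
    dv = (-v + W @ spikes.astype(float) + external) * (dt / tau)
    return v + dv, spikes

external = np.zeros(n)
external[:10] = 2.0                # "activate" a few input neurons
total_spikes = 0
for _ in range(200):
    v, spikes = step(v, external)
    total_spikes += int(spikes.sum())
```

The point of the embodiment work is that the spike outputs of motor neurons in such a simulation now drive muscles in a physics engine, instead of being read off as disembodied predictions.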

Seems like this is mostly based around physical movement. Is the claim that this connectome is a generic model for the entire species of fly, or is it in some way unique to a particular fly? If this approach is scaled up to a mouse, as described, could it capture the memories of a particular mouse (say, how to get through a maze)?


24 July 2025

Simulating the worm is not so easy!

I've done a number of posts on the C. elegans worm, which has 302 neurons, apparently the simplest 'brain' of any creature (the worm itself is only about a millimeter long). So I was interested in Wired's article "The Worm That No Computer Scientist Can Crack" by Claire L. Evans, posted March 26, 2025, in which she investigates the efforts to create a software simulation of the worm, to model its movements, etc. Unsurprisingly, the latest approach is all about data:

Use genetic imaging technology to activate each neuron in the worm’s nervous system one by one, measuring its effect on the other 301. Repeated hundreds of thousands of times in parallel experiments, this methodical process should hoover up enough data to give the computational folks, finally, something to work with—enough, even, to “reverse engineer” the worm completely.

It’s an ambitious proposal, one that will require an unprecedented level of collaboration between some 20 different worm labs. Gal Haspel, a computational neuroscientist at the New Jersey Institute of Technology and the lead author on the reverse engineering paper, estimates that pulling it off may take up to 10 years, cost tens of millions of dollars, and require something in the neighborhood of 100,000 to 200,000 real-life worms. In the process, it will generate more data about C. elegans than has been collected in all of science to date.
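The proposal amounts to measuring an N-by-N influence map: one row per stimulated neuron, one column per recorded neuron. A back-of-the-envelope sketch of the data structure, with a stand-in function where the wet-lab stimulation and imaging would go (everything here is hypothetical placeholder data):

```python
# Sketch of the proposed mapping: stimulate each neuron in turn, record the
# response of the rest, and assemble an influence matrix. The measurement
# function is a synthetic stand-in for the actual live-worm experiments.
import numpy as np

N = 302                                  # neurons in the C. elegans hermaphrodite

def measure_response(stimulated: int) -> np.ndarray:
    """Stand-in for a wet-lab experiment: the average activity change of
    all N neurons when one neuron is activated (placeholder random data)."""
    rng = np.random.default_rng(stimulated)
    return rng.normal(0.0, 1.0, N)

influence = np.zeros((N, N))
for i in range(N):
    influence[i] = measure_response(i)   # row i: effect of neuron i on the others
np.fill_diagonal(influence, 0.0)         # ignore self-stimulation
```

Even in this trivialized form, the scale of the real program is clear: each of the 302 rows represents hundreds of repeated animal experiments, which is where the years, dollars, and 100,000-plus worms go.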

Looks like there's still a lot of work to do! 


27 March 2024

A Brief History of Intelligence (2023) - Max Bennett

Max Bennett has been working in AI for consumer markets, and this book is an ambitious overview that might better be titled 'Breakthroughs in Human Brain Development'.

His history is organized around five evolutionary brain 'breakthroughs' that created or supported new behaviors and capabilities along the developmental chain toward Homo sapiens. In brief these are: steering, reinforcing, simulating, mentalizing and speaking. In part he uses this framework to discuss how AI techniques have attempted to create similar capabilities and solve similar problems. For instance, the reinforcing section (essentially about learning) covers challenges such as: how does an organism learn something new without losing the last thing it learned, and how does it learn when an action has no immediate result - i.e., overcoming temporal delays.

In the speaking section, he notes that recent findings suggest not so much that humans have new neocortex areas supporting language and speech, but that we somehow developed base instinctual behaviors that support learning language - proto-conversations with babies, and joint attention - and that the other primates lack these instincts. He describes this capability as the original singularity, since it allows for the growth of knowledge over time that just builds and builds. He thinks that modern humans have no actual brain capabilities that weren't available tens of thousands of years ago, but we do have the accumulated knowledge of many generations.

Note that this book does not use the word 'consciousness' at all (as far as I noticed), and the word 'mind' is used sparingly, mostly in the term 'theory of mind' (covered in the mentalizing section). One paragraph in particular stood out to me as indicating a pretty reductionist attitude - this is from page 301:

When we talk of these inner simulations, especially in the context of humans, we tend to imbue them with words like concepts, ideas, thoughts. But all these things are nothing more than renderings in the mammalian neocortical simulation. When you "think" about a past or future event, when you ponder the "concept" of a bird, when you have an "idea" as to how to make a new tool, you are merely exploring the rich three-dimensional simulated world constructed by your neocortex. It is no different, in principle, than a mouse considering which direction to turn in a maze.

While in some sense I agree that these processes are indeed the workings of the brain, I think there's a lot more to be explored here than Bennett describes. Still, I felt this was a worthwhile and interesting book, and I'd like to think about it more in comparison with Kevin Mitchell's 'Free Agents' which covers a similar evolutionary path and history.

More on A Brief History of Intelligence.


20 December 2023

Honest Placebos

I've come across a few things referencing placebos lately, in particular 'transparent' or 'open' placebos where the fact that it contains no known effective ingredient is not hidden.

One is a link to this research on "Effects of open-label placebos in clinical trials: a systematic review and meta-analysis" from Nature dated Feb 16, 2021:

Open-label placebos (OLPs) are placebos without deception in the sense that patients know that they are receiving a placebo. The objective of our study is to systematically review and analyze the effect of OLPs in comparison to no treatment in clinical trials.

We found a significant overall effect (standardized mean difference = 0.72, 95% CI 0.39–1.05, p < 0.0001, I2 = 76%) of OLP. Thus, OLPs appear to be a promising treatment in different conditions but the respective research is in its infancy.

Then there's Andy Clark's latest book, The Experience Machine, which posits the brain as a prediction engine, constantly engaging with sensory input both consciously and unconsciously to enable action. Clark concludes with some material about what he refers to as 'honest' placebos:

Honest placebos appear to work by activating subterranean expectations through superficial indicators of reliability and efficacy such as good packaging and professional presentation (foil and blister packs, familiar font, size and uniformity of the pills, and so on). This is because - as we have seen - the bulk of the brain's prediction empire is nonconscious.

Clark reviews a number of other findings in his 'Hacking the Prediction Machine' chapter, and in a sense concludes:

In the end, it looks like anything that can be done to increase our confidence in an intervention, procedure, or outcome is likely to have real benefits. 

He also describes use of certain psychedelic drugs as having the potential to 'reset' the prediction machine in very useful ways.

18 December 2023

Conversing with a whale

This Dec. 12, 2023 report from the SETI Institute, "Whale-SETI: Groundbreaking Encounter with Humpback Whales Reveals Potential for Non-Human Intelligence Communication," seems encouraging.

In response to a recorded humpback ‘contact’ call played into the sea via an underwater speaker, a humpback whale named Twain approached and circled the team’s boat, while responding in a conversational style to the whale ‘greeting signal.’ During the 20-minute exchange, Twain responded to each playback call and matched the interval variations between each signal.

I've long thought it would make sense to attempt communication with the intelligent species on our own planet! 

19 November 2023

Evolution and Free Will

Pulled from the blog list, the recent Brain Science podcast with Kevin Mitchell is worthwhile.

As with his new book, it's titled "Free Agents: How Evolution Gave Us Free Will" and was posted Oct 27, 2023.

02 May 2023

AI reads the brain?

Well, long time no posting!

The post "A.I. trained to read minds and translate private thought to text via brain scans" from BoingBoing caught my eye. Here, a model given extensive training on a specific person's brain activity while they listened to spoken text is able to correlate later brain activity (while the person watches silent films or imagines speaking) and does pretty well at reconstructing at least some of what the person "had in mind". Note, though, that patterns for one person do not carry over to other people.

This language-decoding method had limitations, Dr. Huth and his colleagues noted. For one, fMRI scanners are bulky and expensive. Moreover, training the model is a long, tedious process, and to be effective it must be done on individuals. When the researchers tried to use a decoder trained on one person to read the brain activity of another, it failed, suggesting that every brain has unique ways of representing meaning.
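The per-subject nature of the decoder is easy to see in a toy version: learn a linear map from "brain activity" features to "text" features for one synthetic subject. All data, dimensions, and the mapping itself below are made up for illustration; the actual study used fMRI recordings and a language model.

```python
# Toy per-subject decoder: ridge regression from synthetic "brain activity"
# to synthetic "text features". A different subject would have a different
# underlying mapping B, which is why a trained decoder does not transfer.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_voxels, n_embed = 500, 100, 20

B = rng.normal(size=(n_voxels, n_embed))        # this subject's true mapping
X = rng.normal(size=(n_samples, n_voxels))      # "brain activity" per timepoint
Y = X @ B + 0.1 * rng.normal(size=(n_samples, n_embed))  # paired "text features"

# Ridge regression: W = (X^T X + lam I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)

pred = X @ W
corr = np.corrcoef(pred.ravel(), Y.ravel())[0, 1]   # high for THIS subject
```

Applying `W` to a second subject's activity fails in this picture because that subject's `B` is different, matching the researchers' observation that every brain represents meaning in its own way.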

14 January 2018

Worms re-grow brains with old memories?

How much do we really know about memory storage? This story from National Geographic may make you think again: "Decapitated Worms Re-Grow Heads, Keep Old Memories" by Carrie Arnold (dated July 16, 2013).

After the team verified that the worms had memorized where to find food, they chopped off the worms’ heads and let them regrow, which took two weeks.

Then the team showed the worms with the regrown heads where to find food, essentially a refresher course of their light training before decapitation.

Subsequent experiments showed that the worms remembered where the light spot was, that it was safe, and that food could be found there. The worms’ memories were just as accurate as those worms who had never lost their heads.