22 December 2014

Modeling the worm!

A recent physical simulation of the C. elegans 302-neuron worm with Lego was reported here at the I Programmer site on Nov. 16, 2014: "A Worm's Mind In A Lego Body" by Lucy Black.  This is a nice follow-up to my May 2013 post Modeling Simple Worms.  The claim is that the computer model of these neurons can produce simple physical behavior like the worm's (note that the worm is very small and only capable of simple behavior).  There's a video that shows the Lego model in action.
It is claimed that the robot behaved in ways similar to those observed in C. elegans. Stimulation of the nose stopped forward motion. Touching the anterior and posterior touch sensors made the robot move forward and back accordingly. Stimulating the food sensor made the robot move forward.

The key point is that there was no programming or learning involved in creating the behaviors. The connectome of the worm was mapped and implemented as a software system, and the behaviors emerge from it.
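
To make that concrete, here is a minimal sketch of how a connectome can drive behavior with no task-specific programming: each neuron accumulates weighted input from the neurons that synapse onto it, fires when it crosses a threshold, and designated motor neurons turn firings into motor commands. This is not the project's actual code; the weights, the threshold, and the wiring below are made up for illustration.

    # A minimal, hypothetical sketch of a connectome-driven controller.
    # The real C. elegans connectome has 302 neurons and several thousand
    # connections; the neurons and weights here are placeholders.

    CONNECTOME = {
        # presynaptic neuron -> list of (postsynaptic neuron, weight)
        "NOSE_TOUCH": [("AVA", 3), ("AVD", 2)],  # sensory input -> interneurons
        "AVA": [("VA_MOTOR", 3)],                # interneuron -> backward motor neuron
        "AVD": [("VA_MOTOR", 1)],
        "FOOD_SENSE": [("AVB", 3)],
        "AVB": [("VB_MOTOR", 3)],                # interneuron -> forward motor neuron
    }

    THRESHOLD = 3  # a neuron fires once its accumulated input reaches this value

    def step(fired, charge):
        """Deliver weighted input from every neuron that just fired and
        return the set of neurons that fire on the next time step."""
        next_fired = set()
        for pre in fired:
            for post, weight in CONNECTOME.get(pre, []):
                charge[post] = charge.get(post, 0) + weight
                if charge[post] >= THRESHOLD:
                    next_fired.add(post)
                    charge[post] = 0  # reset after firing
        return next_fired

    def run(stimulus, steps=5):
        """Inject a sensory stimulus and report which motor neurons fire."""
        fired, charge = {stimulus}, {}
        for _ in range(steps):
            fired = step(fired, charge)
            motors = [n for n in fired if n.endswith("_MOTOR")]
            if motors:
                print(stimulus, "->", motors)

    run("NOSE_TOUCH")   # ends up driving the backward motor neuron
    run("FOOD_SENSE")   # ends up driving the forward motor neuron

Nothing in that loop says "back up when the nose is touched"; the mapping from stimulus to motion falls out of which neurons happen to be wired to which, and that is the sense in which the behaviors emerge rather than being programmed.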

The connectome may consist of only 302 neurons, but it is self-stimulating and it is difficult to understand how it works - yet it does.
As for the claim about the worm's mind...  well, we don't know much about a worm's mind, so how could we know whether this model captures it?

The simulation project is run by Tim Busbice at The Connectome Engine.  There's another story on the simulation at the New Scientist site: "First digital animal will be perfect copy of real worm."

20 December 2014

Some AI items - what are the limits?

Rodney Brooks pushes back: "artificial intelligence is a tool, not a threat" (Nov. 10 at the Rethink Robotics blog) - here's an excerpt:
Recent advances in deep machine learning let us teach our machines things like how to distinguish classes of inputs and to fit curves to time data.  This lets our machines “know” whether an image is that of a cat or not, or to “know” what is about to fail as the temperature increases in a particular sensor inside a jet engine.  But this is only part of being intelligent, and Moore’s Law applied to this very real technical advance will not by itself bring about human level or super human level intelligence.
Gary Marcus has also been writing on the topic, such as this piece from October 24 at the New Yorker: "Why We Should Think About the Threat of Artificial Intelligence":
Barrat's core argument, which he borrows from the A.I. researcher Steve Omohundro, is that the drive for self-preservation and resource acquisition may be inherent in all goal-driven systems of a certain degree of intelligence. In Omohundro's words, "if it is smart enough, a robot that is designed to play chess might also want to build a spaceship," in order to obtain more resources for whatever goals it might have.
Marcus chats with Russ Roberts on his EconTalk podcast, posted Dec. 15.