tag:blogger.com,1999:blog-24138862332714172092024-03-08T12:31:46.653-08:00Mind in MindA blog for reports and musings on the mind, and the possibilities for honing the mind's faculties. Interested in, yet skeptical of, the approach of neuroscience to understanding the mind. Subjects of interest include learning techniques and mind/body interactions. Run by Curt Gardner, Portland OR.Curthttp://www.blogger.com/profile/04030128899093351465noreply@blogger.comBlogger105125tag:blogger.com,1999:blog-2413886233271417209.post-61833802420366467472023-12-20T14:00:00.000-08:002023-12-20T14:00:33.990-08:00Honest Placebos<p>I've come across a few things referencing placebos lately, in particular 'transparent' or 'open' placebos where the fact that they contain no known effective ingredient is not hidden.</p><p>One is a link to this research on <a href="https://www.nature.com/articles/s41598-021-83148-6#:~:text=Open%2Dlabel%20placebos%20" target="_blank">"Effects of open-label placebos in clinical trials: a systematic review and meta-analysis"</a> from <i>Nature </i>dated Feb 16, 2021:</p><p></p><blockquote><p>Open-label placebos (OLPs) are placebos without deception in the sense that patients know that they are receiving a placebo. The objective of our study is to systematically review and analyze the effect of OLPs in comparison to no treatment in clinical trials.</p><p>We found a significant overall effect (standardized mean difference = 0.72, 95% CI 0.39–1.05, p < 0.0001, I2 = 76%) of OLP. 
Thus, OLPs appear to be a promising treatment in different conditions but the respective research is in its infancy.</p></blockquote><p>Then in perusing <a href="https://profiles.sussex.ac.uk/p493-andy-clark" target="_blank">Andy Clark</a>'s latest book <i><a href="https://mitpressbookstore.mit.edu/book/9781524748456" target="_blank">The Experience Machine</a></i>, which posits the brain as a prediction engine, constantly engaging with sensory input both consciously and unconsciously to enable action, he concludes with some material about what he refers to as 'honest' placebos:</p><blockquote><p>Honest placebos appear to work by activating subterranean expectations through superficial indicators of reliability and efficacy such as good packaging and professional presentation (foil and blister packs, familiar font, size and uniformity of the pills, and so on). This is because - as we have seen - the bulk of the brain's prediction empire is nonconscious.</p></blockquote><p>Clark reviews a number of other findings in his 'Hacking the Prediction Machine' chapter, and in a sense concludes:</p><blockquote><p>In the end, it looks like anything that can be done to increase our confidence in an intervention, procedure, or outcome is likely to have real benefits. </p></blockquote><p>He also describes use of certain psychedelic drugs as having the potential to 'reset' the prediction machine in very useful ways.</p><p></p>Curthttp://www.blogger.com/profile/04030128899093351465noreply@blogger.com0tag:blogger.com,1999:blog-2413886233271417209.post-34547315590419587842023-12-18T15:50:00.000-08:002023-12-18T15:50:42.188-08:00Conversing with a whale<p>This Dec. 
12, 2023 report from the Seti Institute, <a href="https://www.seti.org/press-release/whale-seti-groundbreaking-encounter-humpback-whales-reveals-potential-non-human-intelligence" target="_blank">Whale-SETI: Groundbreaking Encounter with Humpback Whales Reveals Potential for Non-Human Intelligence Communication</a> seems encouraging.</p><p></p><blockquote>In response to a recorded humpback ‘contact’ call played into the sea via an underwater speaker, a humpback whale named Twain approached and circled the team’s boat, while responding in a conversational style to the whale ‘greeting signal.’ During the 20-minute exchange, Twain responded to each playback call and matched the interval variations between each signal.</blockquote><p></p><p>I've long thought it would make sense to attempt communication with the intelligent species on our own planet! </p>Curthttp://www.blogger.com/profile/04030128899093351465noreply@blogger.com0tag:blogger.com,1999:blog-2413886233271417209.post-18016326755476115002023-11-19T16:05:00.000-08:002023-11-19T16:05:46.773-08:00Evolution and Free Will<p>Pulled from the blog list, the recent Brain Science podcast with Kevin Mitchell is worthwhile.</p><p>As with his new book, it's titled <a href="https://brainsciencepodcast.com/bsp/2023/213-kevin-mitchell" target="_blank">"Free Agents: How Evolution Gave Us Free Will"</a> and was posted Oct 27, 2023.</p>Curthttp://www.blogger.com/profile/04030128899093351465noreply@blogger.com0tag:blogger.com,1999:blog-2413886233271417209.post-46954643035434941392023-05-02T15:20:00.000-07:002023-05-02T15:20:07.270-07:00AI reads the brain?<p>Well, long time no posting!</p><p>The post <a href="https://boingboing.net/2023/05/01/a-i-trained-to-read-minds-and-translate-private-thought-to-text-via-brain-scans.html" target="_blank">"A.I. trained to read minds and translate private thought to text via brain scans"</a> from BoingBoing caught my eye. 
Here, with extensive training on a specific person's brain activity while they listen to spoken text, the model is able to correlate later brain activity (while the person watches silent films or thinks of speaking) and do pretty well at reconstructing at least some of what the person "had in mind". Note though that patterns for one person do not carry over to other people.</p><blockquote style="border: none; margin: 0 0 0 40px; padding: 0px;"><p style="text-align: left;">This language-decoding method had limitations, Dr. Huth and his colleagues noted. For one, fMRI scanners are bulky and expensive. Moreover, training the model is a long, tedious process, and to be effective it must be done on individuals. When the researchers tried to use a decoder trained on one person to read the brain activity of another, it failed, suggesting that every brain has unique ways of representing meaning.</p></blockquote>Curthttp://www.blogger.com/profile/04030128899093351465noreply@blogger.com0tag:blogger.com,1999:blog-2413886233271417209.post-23440560654550487802018-01-14T20:40:00.004-08:002018-01-14T20:40:56.868-08:00Worms re-grow brains with old memories?<div class="tr_bq">
How much do we really know about memory storage? This story from National Geographic may make you think again: <a href="https://blog.nationalgeographic.org/2013/07/16/decapitated-worms-regrow-heads-keep-old-memories/" target="_blank">"Decapitated Worms Re-Grow Heads, Keep Old Memories"</a> by Carrie Arnold (dated July 16, 2013).</div>
<blockquote>
After the team verified that the worms had memorized where to find food, they chopped off the worms’ heads and let them regrow, which took two weeks. </blockquote>
<blockquote>
Then the team showed the worms with the regrown heads where to find food, essentially a refresher course of their light training before decapitation. </blockquote>
<blockquote>
Subsequent experiments showed that the worms remembered where the light spot was, that it was safe, and that food could be found there. The worms’ memories were just as accurate as those worms who had never lost their heads.</blockquote>
Curthttp://www.blogger.com/profile/04030128899093351465noreply@blogger.com0tag:blogger.com,1999:blog-2413886233271417209.post-21837918970060149552016-05-16T20:33:00.001-07:002016-05-16T20:33:15.610-07:00What do we really know about Matter? Two recent articles hit a similar theme, pushing the notion that our experience (consciousness) is in some sense on firmer ground than our understanding of physical matter.<br />
<br />
The first I came across today via Twitter: <a href="http://nyti.ms/27qH8gd" target="_blank">"Consciousness isn't a Mystery: It's Matter"</a> by philosopher Galen Strawson in the New York Times (May 16, 2016). Here's the gist:<br />
<blockquote>
... we know exactly what consciousness is — where by “consciousness” I mean what most people mean in this debate: experience of any kind whatever. It’s the most familiar thing there is, whether it’s experience of emotion, pain, understanding what someone is saying, seeing, hearing, touching, tasting or feeling. It is in fact the only thing in the universe whose ultimate intrinsic nature we can claim to know. It is utterly unmysterious.
<br />
<br />
The nature of physical stuff, by contrast, is deeply mysterious, and physics grows stranger by the hour. (Richard Feynman’s remark about quantum theory — “I think I can safely say that nobody understands quantum mechanics” — seems as true as ever.) Or rather, more carefully: The nature of physical stuff is mysterious <em>except insofar as consciousness is itself a form of physical stuff</em>. </blockquote>
I think this is on the right track... emphasizing the primacy of experience, but not claiming that experience is necessarily exposing the actual nature of 'physical stuff'. It's easy to assume we have a good handle on Matter, when in fact we've only discovered some rules about it, along with the working assumption that whatever it is, if you get a complex enough organization you get what we think of as conscious experience.<br />
<br />
Back in April, Amanda Gefter wrote about and interviewed cognitive scientist (and author of <i>Visual Intelligence</i>) <a href="http://www.cogsci.uci.edu/~ddhoff/" target="_blank">Donald Hoffman</a> in an article entitled <a href="http://www.theatlantic.com/science/archive/2016/04/the-illusion-of-reality/479559/" target="_blank">"The Case Against Reality"</a> in The Atlantic. Hoffman argues that our evolutionary path, driven by fitness, means that we have no reliable means of accessing what's really out there.<br />
<blockquote>
The idea that what we’re doing is measuring publicly accessible objects, the idea that objectivity results from the fact that you and I can measure the same object in the exact same situation and get the same results — it’s very clear from quantum mechanics that that idea has to go. Physics tells us that there are no public physical objects. So what’s going on? Here’s how I think about it. I can talk to you about my headache and believe that I am communicating effectively with you, because you’ve had your own headaches. The same thing is true as apples and the moon and the sun and the universe. Just like you have your own headache, you have your own moon. But I assume it’s relevantly similar to mine. That’s an assumption that could be false, but that’s the source of my communication, and that’s the best we can do in terms of public physical objects and objective science.</blockquote>
Presumably since all humans are on the same evolutionary path, we do indeed have similar assumptions and experiences. But it may be much harder to communicate with beings coming from different evolutionary pressures.Curthttp://www.blogger.com/profile/04030128899093351465noreply@blogger.com0tag:blogger.com,1999:blog-2413886233271417209.post-61880186004546208112015-10-21T21:21:00.001-07:002015-10-21T21:21:04.679-07:00More news on the worm C. elegans - a few more neurons? Been awhile since I've been active here, but this exciting worm news certainly rates a post. To get caught up, check this previous post: <a href="http://mind-in-mind.blogspot.com/2014/12/modeling-worm.html" target="_blank">Modeling the Worm!</a><br />
<br />
In this story from Nature, <a href="http://www.nature.com/news/surprise-mystery-neurons-found-in-male-worms-1.18558" target="_blank">"Surprise 'mystery' neurons found in male worms"</a> the title gives it away.<br />
<blockquote>
The neurons help the worms learn when to prioritize mating over eating, revealing how a seemingly simple brain can be capable of a complex learned behaviour — and one that differs between the sexes.<br />
<i><br />Caenorhabditis</i> <i>elegans</i> worms are <a href="http://www.nature.com/news/neuroscience-as-the-worm-turns-1.12461" style="color: #5c7996; text-decoration: none;">the model animal of choice for many neuroscientists</a>, because their neural circuits are so simple that they can be <a href="http://www.nature.com/news/video-reveals-entire-organism-s-neurons-at-work-1.15240" style="color: #5c7996; text-decoration: none;">mapped in full</a>. They have two sexes: hermaphrodite and male. Hermaphrodites, the best studied, have just 302 neurons, but males have more — the MCMs raise their total to 385 neurons.</blockquote>
So it looks like there's more work to be done to get a good handle on the worm brain.
Curthttp://www.blogger.com/profile/04030128899093351465noreply@blogger.com0tag:blogger.com,1999:blog-2413886233271417209.post-35367845761336296892014-12-22T17:21:00.002-08:002014-12-22T17:21:29.271-08:00Modeling the worm! <div class="tr_bq">
A recent physical simulation of the <i>C. elegans</i> 302-neuron worm with Lego is reported at the I Programmer site (Nov. 16, 2014): <a href="http://www.i-programmer.info/news/105-artificial-intelligence/7985-a-worms-mind-in-a-lego-body.html" target="_blank">"A Worm's Mind In A Lego Body"</a> by Lucy Black. This is a nice follow-up to my May 2013 post <a href="http://mind-in-mind.blogspot.com/2013/05/modeling-simple-worms.html" target="_blank">Modeling Simple Worms</a>. The claim here is that the computer model of these neurons is able to produce simple physical behavior that is like the worm's behavior (note that the worm is very small and only capable of simple behavior). There's a video that shows the Lego model in action.</div>
<blockquote>
It is claimed that the robot behaved in ways that are similar to observed <i>C. elegans</i>. Stimulation of the nose stopped forward motion. Touching the anterior and posterior touch sensors made the robot move forward and back accordingly. Stimulating the food sensor made the robot move forward.<br />
<br />
The key point is that there was no programming or learning involved to create the behaviors. The connectome of the worm was mapped and implemented as a software system and the behaviors emerge.<br />
<br />
The connectome may only consist of 302 neurons but it is self-stimulating and it is difficult to understand how it works - but it does.</blockquote>
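The "no programming, just wiring" point can be illustrated with a toy sketch. To be clear, all of the neuron names and connection weights below are invented for illustration; the real project implements the actual 302-neuron wiring map:

```python
# Toy sketch of connectome-driven behavior: activity flows through fixed,
# pre-mapped connections; nothing is trained or programmed beyond the
# wiring itself. Neuron names and weights are invented, not the real map.

CONNECTOME = {
    "nose_sensor":  {"inter_1": 1.0},
    "touch_front":  {"inter_2": 1.0},
    "inter_1":      {"motor_forward": -2.0},   # nose stimulation inhibits forward motion
    "inter_2":      {"motor_forward": -1.5, "motor_backward": 1.5},
    "food_sensor":  {"motor_forward": 1.0},
}

def step(activity):
    """One update: each downstream neuron sums weighted input and thresholds it."""
    incoming = {}
    for src, targets in CONNECTOME.items():
        for dst, w in targets.items():
            incoming[dst] = incoming.get(dst, 0.0) + w * activity.get(src, 0.0)
    # Sensors keep their stimulated values; other neurons fire if net input > 0.
    new = dict(activity)
    for neuron, total in incoming.items():
        new[neuron] = 1.0 if total > 0 else 0.0
    return new

# Stimulating the food sensor activates the forward motor neuron.
state = step({"food_sensor": 1.0})
print(state["motor_forward"])   # 1.0
```

The behaviors in the quote (stop on nose touch, advance on food) fall out of which connections exist and their signs, which is the sense in which the Lego robot's behavior "emerges" from the mapped wiring rather than from any behavior-specific code.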
As for the claim about the worm's mind... well we don't know much about a worm's mind, so how could we know if this model captures it?<br />
<br />
The simulation project is run by Tim Busbice at <a href="http://www.connectomeengine.com/" target="_blank">The Connectome Engine</a>. Another story on the simulation is at the New Scientist site: <a href="http://www.newscientist.com/article/mg22429972.300-first-digital-animal-will-be-perfect-copy-of-real-worm.html" target="_blank">"First digital animal will be perfect copy of real worm."</a>Curthttp://www.blogger.com/profile/04030128899093351465noreply@blogger.com0tag:blogger.com,1999:blog-2413886233271417209.post-31141565759467325992014-12-20T13:31:00.001-08:002014-12-20T13:31:35.506-08:00Some AI items - what are the limits? Rodney Brooks pushes back: <a href="http://www.rethinkrobotics.com/artificial-intelligence-tool-threat/" target="_blank">"artificial intelligence is a tool, not a threat"</a> (Nov 10 at the Rethink Robotics blog) - here's a piece:<br />
<blockquote class="tr_bq">
Recent advances in deep machine learning let us teach our machines things like how to distinguish classes of inputs and to fit curves to time data. This lets our machines “know” whether an image is that of a cat or not, or to “know” what is about to fail as the temperature increases in a particular sensor inside a jet engine. But this is only part of being intelligent, and Moore’s Law applied to this very real technical advance will not by itself bring about human level or super human level intelligence.</blockquote>
Gary Marcus has also been writing on the topic, such as this piece from October 24 at the New Yorker: <a href="http://www.newyorker.com/tech/elements/why-we-should-think-about-the-threat-of-artificial-intelligence" target="_blank">"Why We Should Think About the Threat of Artificial Intelligence"</a>:<br />
<blockquote class="tr_bq">
Barrat's core argument, which he borrows from the A.I. researcher <a href="http://steveomohundro.com/" target="_blank">Steve Omohundro</a>, is that the drive for self-preservation and resource acquisition may be inherent in all goal-driven systems of a certain degree of intelligence. In Omohundro's words, "if it is smart enough, a robot that is designed to play chess might also want to build a spaceship," in order to obtain more resources for whatever goals it might have.</blockquote>
Marcus chats with Russ Roberts on his <a href="http://www.econtalk.org/archives/2014/12/gary_marcus_on.html" target="_blank">Econtalk podcast</a> posted Dec. 15.Curthttp://www.blogger.com/profile/04030128899093351465noreply@blogger.com0tag:blogger.com,1999:blog-2413886233271417209.post-51468848347579432322014-10-25T13:15:00.001-07:002014-10-25T13:15:48.013-07:00What can the mind-set do to the body? That's the question at issue in the recent NYT article by Bruce Grierson <a href="http://nyti.ms/1yXyeqR" target="_blank">"What if Age Is Nothing but a Mind-Set?"</a> covering the work of Harvard psychologist Ellen Langer. Langer has conducted various studies in which people are "primed" with positive information about their situation - that they have control, are responsible, etc. - and has frequently seen improvements.<br />
<br />
The study behind the title, done in 1981, involved bringing a group of men in their 70s into a controlled environment simulating 1959 for 5 days, then evaluating them on various measures. In various ways they appeared to be "younger" afterwards - in manual dexterity, in sitting taller. Earlier studies had led her to this way of thinking about priming:<br />
<blockquote>
To Langer, this was evidence that the biomedical model of the day — that the mind and the body are on separate tracks — was wrongheaded. The belief was that “the only way to get sick is through the introduction of a pathogen, and the only way to get well is to get rid of it,” she said, when we met at her office in Cambridge in December. She came to think that what people needed to heal themselves was a psychological “prime” — something that triggered the body to take curative measures all by itself.</blockquote>
If we believe the mind to be the result of a physical process, then I don't see it as too far-fetched to believe that different mind-sets can manifest in different physical outcomes. This is of course related to placebos generally:<br />
<blockquote>
Langer came to believe that one way to enhance well-being was to use all sorts of placebos. Placebos aren’t just sugar pills disguised as medicine, though that’s the literal definition; they are any intervention, benign but believed by the recipient to be potent, that produces measurable physiological changes. Placebo effects are a striking phenomenon and still not all that well understood. Entire fields like psychoneuroimmunology and psychoendocrinology have emerged to investigate the relationship between psychological and physiological processes. Neuroscientists are charting what’s going on in the brain when expectations alone reduce pain or relieve Parkinson’s symptoms. More traditionally minded health researchers acknowledge the role of placebo effects and account for them in their experiments. But Langer goes well beyond that. She thinks they’re huge — so huge that in many cases they may actually be the main factor producing the results.</blockquote>
Now Langer is taking the research to an extreme - setting up a positive situation for women with stage 4 breast cancer, which the medical establishment essentially has no answers for. While it's hard to believe that this will work, it still seems to me to be an avenue worth pursuing.Curthttp://www.blogger.com/profile/04030128899093351465noreply@blogger.com0tag:blogger.com,1999:blog-2413886233271417209.post-41247212542021125742014-10-12T11:11:00.000-07:002014-10-12T11:11:45.544-07:00Poor title edition: "Are We Really Conscious?" in NYTThe article is by Michael Graziano of Princeton, <a href="http://nyti.ms/1suGuLt" target="_blank">"Are We Really Conscious?"</a> posted Oct. 10, 2014. I think the better headline would be "What Are We Really Conscious Of?" (I recognize that authors usually don't pen the headlines). In any case, Graziano has a theory about our awareness being a distortion of reality, and that basic point is not a new one. But there is one line that I object to, and it seems that Graziano himself contradicts it in the article.<br />
<br />
Midway through, he writes: "But the argument here is that there is no subjective impression; there is only information in a data-processing device." ('Device' here refers to the brain.) This is the classic reductionist move - it's <b>only</b> data processing. It fails to recognize the potential for levels of complexity and organization to give rise to interesting phenomena in their own right, and it reduces the brain - a really interesting thing, not yet well understood - to a "device".
<br />
<br />
Further in the paragraph: "The brain’s cognitive machinery accesses that interlinked information and derives several conclusions: There is a self, a me; there is a red thing nearby; there is such a thing as subjective experience; and I have an experience of that red thing." Now this is at least a little more open to interesting investigation - what might we mean by 'self'? What is a subjective experience?
<br />
<br />
My conclusion these days is that a coherent view of 'self' is at the very least the entire organism (i.e. my whole body), and probably it needs to go beyond that, to extend some way into the environment. So does my self have experiences - yes, I think it certainly does. My body (which obviously includes my brain) reacts to experiences. Experiences appear to have an information processing aspect or basis, and that's very interesting, but to end there is missing at least half the story (IMHO).
<br />
<br />
By the end, Graziano writes: "In this theory, awareness is not an illusion. It’s a caricature. Something — attention — really does exist, and awareness is a distorted accounting of it."<br />
<br />
Ok, so which is it - do we have subjective experience or not? <br />
<br />
Curthttp://www.blogger.com/profile/04030128899093351465noreply@blogger.com0tag:blogger.com,1999:blog-2413886233271417209.post-73370642237876375752014-07-27T05:58:00.001-07:002014-07-27T05:58:38.480-07:00Vaughan Bell on Neuro Metaphors - The Observer. Bell sums up how the metaphors help and probably limit us. <a href="http://www.theguardian.com/science/2014/jul/26/photography-supercomputers-see-ourselves-in-our-inventions-brain-neuroscience" target="_blank">'From photography to supercomputers: how we see ourselves in our inventions'</a> by Vaughan Bell in The Observer, July 26, 2014.<br />
<blockquote class="tr_bq">
When computers arrived, we inevitably saw ourselves in our machines and the idea of the mind as an information processor became popular. Here, the mind is thought to consist of information processing networks where data is computed and transformed. One of the newest and most fashionable theories argues that the central function of the brain is to statistically predict new information. The idea is that the brain tries to minimise the errors it makes in its predictions by adjusting its expectations as it gets new information.</blockquote>
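The error-minimizing loop Bell describes can be sketched in a few lines. This is just a generic delta-rule-style update for illustration, not anything from the article itself, and the learning rate is an arbitrary choice:

```python
# Minimal sketch of prediction-error minimization: a "brain" keeps a
# running expectation and nudges it toward each new observation in
# proportion to the prediction error. Illustrative only.

def update(expectation, observation, learning_rate=0.2):
    error = observation - expectation           # prediction error
    return expectation + learning_rate * error  # adjust the expectation

expectation = 0.0
for obs in [10.0] * 30:     # the world keeps delivering the value 10
    expectation = update(expectation, obs)

print(round(expectation, 2))   # the expectation has converged close to 10
```

The point of the metaphor is that nothing stores "10" directly; the value is implicit in a process that keeps shrinking its own errors, which is roughly the shape of the predictive-processing story.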
I've long thought about this same issue - that we overuse the metaphor when thinking about the brain. Rather than simply working with the idea that computers replicate certain functions of a brain, we start to believe that the brain must work like (or perhaps in a sense be) a computer. But it does give us an approach that can bear fruit, sometimes for a long time.Curthttp://www.blogger.com/profile/04030128899093351465noreply@blogger.com0tag:blogger.com,1999:blog-2413886233271417209.post-7534971501247311772014-03-01T21:47:00.002-08:002014-10-12T11:14:35.652-07:00'Touching a Nerve' by Patricia Churchland (2013): A personal, non-academic review of the life physical. In general I found <a href="http://books.wwnorton.com/books/Touching-a-Nerve/" target="_blank">'Touching a Nerve'</a> a good read - the subtitle 'The Self as Brain' is a kind of undertone to her (largely common-sensical) look at a variety of subjects, all based on a neuroscientific outlook. Here are a few points of special interest to me.<br />
<div>
<br /></div>
<div>
On free will - she comes out pretty strongly against the Harris line that "free will is an illusion" - she says if it means there's no "contra-causal free will" then the observation is "only marginally interesting."</div>
<blockquote class="tr_bq">
But what if <i>free will is illusory</i> means something else? What if it means, for example, that because there is a neural substrate for our deliberations and choices, we cannot have free will? Now I am totally at a loss. Why would anyone say such a thing? So what do they think <i>is</i> required to make genuine choices? A non-physical soul? Says who? (184)</blockquote>
<blockquote class="tr_bq">
What is <i>not</i> illusory is self-control, even though it can vary as a function of age, temperament, habits, sleep, disease, food and many other factors that affect how nervous systems function. (185) </blockquote>
Churchland gives credit to Freud for being an early adopter (circa 1895) of the view that the unconscious processes are both mental and physical.<br />
<blockquote class="tr_bq">
He understood that unconscious reasoning and intentions and thoughts need to be invoked to explain such things as complex perception (for example, heard speech as having a specific meaning) and complex motor acts (for example, speaking intelligibly and purposefully).</blockquote>
<blockquote class="tr_bq">
[...] </blockquote>
<blockquote class="tr_bq">
He realized that he had essentially no idea what a vocabulary spanning the brain and behavioral science would look like. His conclusion was that we have no choice but to make do with what we know is a flawed and misleading vocabulary, namely, that of intentions, reasons, beliefs, and so on, to describe unconscious states. (201)</blockquote>
I liked this bit, on the interplay of conscious and unconscious:<br />
<blockquote class="tr_bq">
Your conscious brain needs your unconscious brain, and vice versa. The character and features of your conscious life depend on your unconscious activities. And of course, conscious events can in turn have an effect on unconscious activities. (207) </blockquote>
And this bit on conscious decision-making as a constraint satisfaction process:<br />
<blockquote class="tr_bq">
Precisely what my dear old brain is doing as I go through these exercises is not entirely known. That is, we can think of it in terms of constraint satisfaction, but we are still a bit vague about what constraint satisfaction is in neural terms. Roughly speaking, we do know that in constraint satisfaction operations, the brain integrates skills, knowledge, memories, perceptions, and emotions and somehow, in a manner we do not precisely understand, comes to a single result. (219)</blockquote>
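As a rough computational analogy for that "integrate many factors, arrive at one result" picture, here is a toy weighted-constraint sketch. The options, factors, and weights are all invented for illustration; nothing here is from the book:

```python
# Toy constraint-satisfaction analogy: each candidate decision is scored
# against several weighted "constraints" (standing in for skills, memories,
# emotions, and so on), and the single best-satisfying option wins.
# All names and numbers are invented.

constraints = {                  # factor -> weight (importance)
    "hunger":    2.0,
    "tiredness": 1.0,
    "curiosity": 1.5,
}

options = {                      # option -> how well it satisfies each factor
    "cook dinner":  {"hunger": 1.0,  "tiredness": -0.5, "curiosity": 0.0},
    "take a nap":   {"hunger": -0.5, "tiredness": 1.0,  "curiosity": 0.0},
    "read a paper": {"hunger": -0.5, "tiredness": -0.2, "curiosity": 1.0},
}

def decide(options, constraints):
    def score(satisfactions):
        return sum(constraints[f] * s for f, s in satisfactions.items())
    return max(options, key=lambda o: score(options[o]))

print(decide(options, constraints))   # prints "cook dinner"
```

The "somehow comes to a single result" part is exactly what this glosses over: here it is a trivial arg-max, while in neural terms, as the quote says, the integration process is not precisely understood.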
She disputes Dennett's position that language has to be part of consciousness, partly on personal grounds:<br />
<blockquote class="tr_bq">
A further problem is that consciousness - mine, anyhow - involves so much more than speech. Indeed, we may experience much for which we have no precise linguistic characterization at all, such as the difference between the smell of cinnamon and the smell of cloves or the difference between feeling energetic and feeling excited, or what an orgasm is like. (250)</blockquote>
While other mammals do not have our kind of language, they do seem to communicate, and in terms of brain structure, they have very similar organs and patterns of activity. She feels this indicates that many animals have some level of conscious awareness.<br />
<br />
The overall picture she draws is of the brain as a looping structure, with some highly networked neurons able to convey signals to many other areas, to support the type of integration that we see.<br />
<br />
I noticed that the book initially got a number of one-star reviews at Amazon, mostly short critiques of her overly reductionist viewpoint (an organized effort I presume!). I did not find her to be overly reductionist in this book. While she doesn't explicitly take on emergence as a topic, in the epilogue she does make this argument:<br />
<blockquote class="tr_bq">
If, as seems increasingly likely, dreaming, learning, remembering, and being consciously aware are activities of the physical brain, it does not follow that they are not real. Rather, the point is that their reality depends on a neural reality. If reductionism is essentially about explanation, the lament and the lashing out are missing the point. Nervous systems have many levels of organization, from molecules to the whole brain, and research on all levels contributes to our wider and deeper understanding. (262)</blockquote>
Accessible, personal, and a good overview - I recommend it.<br />
Curthttp://www.blogger.com/profile/04030128899093351465noreply@blogger.com0tag:blogger.com,1999:blog-2413886233271417209.post-71089107515438424752014-02-19T19:41:00.001-08:002014-02-19T19:43:37.473-08:00Dennett and Harris wrestle on 'Free Will'My take on the recent essays by Daniel Dennett and Sam Harris on the short Harris book 'Free Will' (2012, <a href="http://mind-in-mind.blogspot.com/2012/03/free-will-by-sam-harris.html" target="_blank">my original thoughts here</a>). Dennett wrote a 22 page review of the book in late January, and Harris put out his reply about a week later. These are my brief notes on the exchange.<br />
<br />
<b><a href="http://www.samharris.org/blog/item/reflections-on-free-will" target="_blank">Dennett's main claims</a>:</b><br />
<br />
1. Harris is fighting a strawman - everyone basically agrees that folk free will (i.e. libertarian free will) is wrong. Dennett makes the case for compatibilist notions (without claiming determinism is a settled matter - he says that is for science to decide).<br />
<br />
2. Harris is not taking the compatibilist position seriously and/or not in an informed way. Dennett does certainly appeal to the 'vast library' in a condescending way.<br />
<br />
3. Harris seems fixed on one point in time, not dealing with dynamics over time. Dennett rejects the Exact Replay scenario (rewind the clock, put literally everything back in place - see also #4 below in the Harris section). I liked this passage:<br />
<blockquote class="tr_bq">
Harris ignores the reflexive, repetitive nature of thinking. My choice at time t can influence my choice at time t’ which can influence my choice at time t”. How? My choice at t can have among its effects the biasing of settings in my brain (which I cannot directly inspect) that determine (I use the term deliberately) my choice at t’. I can influence my choice at t’. I influenced it at time t (without “inspecting” it). Like many before him, Harris shrinks the me to a dimensionless point, “the witness” who is stuck in the Cartesian Theater awaiting the decisions made elsewhere. That is simply a bad theory of consciousness.</blockquote>
4. Claims Harris is inconsistent about whether we can 'grab hold of our puppet strings' - perhaps also about how influence can work.<br />
<br />
5. Dennett acknowledges we can't be an 'ultimate cause' - an infinite regress issue. But over time we can "influence ourselves" (and others) in meaningful ways.<br />
<br />
6. Dennett takes issue with what he sees as Harris's evasion of responsibility. I think this is the main area where the two men are probably not far apart in practice, but they are accusing each other of allowing for bad results. Key lines:<br />
<blockquote class="tr_bq">
Harris should take more seriously the various tensions he sets up in this passage. It is wise to hold people responsible, he says, even though they are not responsible, not <i>really</i>. But we don’t hold everybody responsible; as he notes, we excuse those who are unresponsive to demands, or in whom change is impossible. That’s an important difference, and it is based on the different abilities or competences that people have. Some people (are determined to) have the abilities that justify our holding them responsible, and some people (are determined to) lack those abilities. But determinism doesn’t do any work here; in particular it doesn’t disqualify those we hold responsible from occupying that role. In other words, real responsibility, the kind the everyday folk think they have (if Harris is right), is strictly impossible; but when those same folk wisely and justifiably hold somebody responsible, that isn’t real responsibility!</blockquote>
Overall: I actually found Dennett fairly straightforward, somewhat condescending, and probably drawing some unwarranted conclusions about Harris's position. I found his points to be pretty interesting and worthy of consideration.<br />
<br />
<b><a href="http://www.samharris.org/blog/item/the-marionettes-lament" target="_blank">Harris's main claims</a>:</b><br />
<br />
1. Says Dennett misunderstands his arguments. Claims that libertarian free will is still quite widely held.<br />
<br />
2. Harris is fully focused on taking down libertarian free will. He feels that removing that illusion will remove any rational reason for hatred - but leaves in place reasons for removing dangerous folks from society. Key lines:<br />
<blockquote class="tr_bq">
And accepting incompatibilism has important intellectual and moral consequences that you ignore—the most important being, in my view, that it renders hatred patently irrational (while leaving love unscathed). If one is concerned about the consequences of maintaining a philosophical position, as I know you are, helping to close the door on human hatred seems far more beneficial than merely tinkering with a popular illusion.</blockquote>
3. Harris says he's not fully convinced of determinism, but thinks it must be nearly true.<br />
<br />
4. There's a weird line about indeterminism re: the putt replay. Harris writes:<br />
<blockquote class="tr_bq">
That is, whatever his ability as a golfer, Austin would miss that same putt a trillion times in a row—provided that every atom and charge in the universe was exactly as it had been the first time he missed it. You think this fact (we can call it determinism, as you do, but it includes the contributions of indeterminism as well, provided they remain the same[3]) says nothing about free will. </blockquote>
This seems to me to describe the 'pseudo-random' case, not truly random indeterminism. Not a big deal, but I found the "provided they remain the same" qualifier odd.<br />
<br />
5. As Dennett argues, I think Harris does not really grapple with the compatibilist argument. He does not engage the idea of a dynamic system changing (influencing its future direction) over time.<br />
Key lines: "In other words, your compatibilism seems an attempt to justify the conventional notion of blame, which my view denies. This is a difference worth focusing on."<br />
<br />
Overall: Actually seems a bit more whiny than Dennett. Says he wanted debate or conversation, not to trade essays. I think that Dennett mostly understands exactly where Harris stands. Harris has his reasons for not wanting to grapple with compatibilism, but personally I think he hasn't shown that he really has a grip on Dennett's points.<br />
<br />
<b>How much neuroscience in 'Social'?</b> (2013-11-25)<br />
<br />
Psychologist Matthew Lieberman does like the fMRI! In his new book 'Social' (2013) the UCLA professor and Director of the <a href="http://www.scn.ucla.edu/" target="_blank">Social Cognitive Neuroscience Lab</a> makes the case for the neural underpinnings of our social learning and behavior. The question that came to my mind, though, was how much of the message is basically social psychology (which is valuable, don't get me wrong, but not dependent on fMRI findings). <br />
<br />
The book features many diagrams of brains, pointing out various regions that are active during different cognitive tasks. In general, correlating active areas with cognitive tasks can be very useful for understanding brain structures, even if it doesn't tell us how the cognitive tasks are actually achieved. Most illuminating are the findings where either the same area is used during different types of tasks, or different areas are used for what seem to be very similar tasks. I think it's probably valuable to combine these types of findings with traditional psychology to see what may be illuminated.<br />
<br />
Lieberman's key claim is that our 'default' brain mode is used for so-called 'mentalizing' - sorting through the social world, trying to understand other people's motives and intentions. This is shown by the activation of certain brain areas both while explicitly thinking about social problems and when not attempting to do other cognitive tasks. <br />
<br />
We typically use a particular prefrontal brain region for general cognition (reading, memorizing, computing, etc.), and it was thought that these areas were critical to all learning. But various studies have found a 'social encoding advantage' in learning: using the mentalizing system to form overall impressions of people and their intentions works better than simple memorization of people's behavior. The finding was that 'the folks making sense of the information socially have done better on memory tests than the folks intentionally memorizing the material.' (284) From the neuroscience angle:<br />
<blockquote class="tr_bq">
Jason Mitchell, a social neuroscientist at Harvard University, ran an fMRI version of the social encoding advantage study. As in a dozen studies before his, he found that when people were asked to memorize the information, activity in the lateral prefrontal cortex and the medial temporal lobe predicted successful remembering of that information later on. According to the standard explanation of the social encoding advantage, the same pattern should have been present or even enhanced when people did the social encoding task, but that isn't what happened. The traditional learning network wasn't sensitive to effective social encoding. Instead the central node of the mentalizing network, the dorsomedial prefrontal cortex, was associated with successful learning during social encoding. (284-5)</blockquote>
Lieberman suggests a number of interesting applications of this finding to change and hopefully improve the way we teach kids, who are intensely interested in the social world and not so interested in memorizing facts - such as by teaching history more in terms of the social dramas (rather than actions and dates), and math by engaging students as both tutors and tutees. <br />
<br />
The book has sections on three stages of social development, which he terms connection, mindreading (theory of mind), and harmonizing - and argues that significant brain resources are devoted to maintaining connection with other people. Harmonizing is about taking on many of the goals and behaviors of our social group (particularly active during adolescence). The idea here is that our sense of self as supported in the brain is very susceptible to the social messages we receive.<br />
<br />
Overall I liked this book - not that it really lives up to the subtitle 'Why Our Brains Are Wired to Connect'; it's more about 'How' than 'Why'. At its best it reminds us that we are truly social creatures, and the neuroscience helps illustrate that point.<br />
<br />
<b>Will we understand science in the future?</b> (2013-11-25)<br />
<br />
<a href="http://tylercowen.com/" target="_blank">Tyler Cowen</a> suggests not in his book 'Average Is Over' (2013). The book is a bit of prognostication about the near future, looking mainly at how the use of computers is changing and will change our world. The basic idea is that the people who can add value to computer work in some way will reap most of the rewards.<br />
<br />
For the purposes of this blog, I thought the part about computer-driven science was most interesting. Cowen lists three reasons why science may become harder to understand:<br />
<blockquote class="tr_bq">
1. In some (not all) scientific areas, problems are becoming more complex and unsusceptible to simple, intuitive, big breakthroughs.<br />
2. The individual scientific contribution is becoming more specialized, a trend that has been running for centuries and is unlikely to stop.<br />
3. One day soon, intelligent machines will become formidable researchers in their own right. (206)</blockquote>
And here's one attempt at a summary:<br />
<blockquote class="tr_bq">
The remaining human knowledge of science will be very practical, very prediction-oriented, and well geared for improving our lives. Of course those are all positive developments. Still, as a general worldview, science will not always be very inspiring or illuminating. The general educated public will to some extent be shut out from a scientific understanding of the world, and we will run the risk that they might detach from a long-term loyalty to scientific reasoning. (219)</blockquote>
It will be interesting to see how much of this thinking will apply to neuroscience.<br />
<br />
<b>Brain decoding - how far can it go?</b> (2013-10-23)<br />
<br />
Kerri Smith has a good overview of the topic in <a href="http://www.nature.com/news/brain-decoding-reading-minds-1.13989" target="_blank">"Brain decoding: Reading minds"</a> at <i>Nature</i>. The range of investigation goes from identifying the content of dreams to verifying whether someone is lying, to trying to understand the full process of how the brain can encode information. But the starting point is fairly modest - trying to identify what object someone is looking at based on patterns in the visual area of the brain. There's a good reason to start there:<br />
<blockquote>
Applying their techniques beyond the encoding of pictures and movies will require a vast leap in complexity. "I don't do vision because it's the most interesting part of the brain," says Gallant. "I do it because it's the easiest part of the brain. It's the part of the brain I have a hope of solving before I'm dead." But in theory, he says, "you can do basically anything with this."</blockquote>
But of course theory and practice are two different things, and there may be practical limits:<br />
<blockquote>
Devising a decoding model that can generalize across brains, and even for the same brain across time, is a complex problem. Decoders are generally built on individual brains, unless they're computing something relatively simple such as a binary choice — whether someone was looking at picture A or B. But several groups are now working on building one-size-fits-all models. "Everyone's brain is a little bit different," says Haxby, who is leading one such effort. At the moment, he says, "you just can't line up these patterns of activity well enough."</blockquote>
Using this kind of research to detect 'secret' product preferences seems pretty misguided to me. But that doesn't stop some from trying!<br />
<br />
<b>Decide what you think - it matters!</b> (2013-10-01)<br />
<br />
Tom Stafford at mindhacks.com <a href="http://mindhacks.com/2013/09/29/the-effect-of-diminished-belief-in-free-will/" target="_blank">writes on free will studies</a> that indicate some interesting side effects of reading about a deterministic model. Here's the bottom line:<br />
<blockquote>
This is a young research area. We still need to check that individual results hold up, but taken all together these studies show that our belief in free will isn’t just a philosophical abstraction. We are less likely to behave ethically and kindly if our belief in free will is diminished.
</blockquote>
Personally I do think that, regardless of the exact underlying physical mechanisms, one's choices help set the pattern for future behaviors - so best to act carefully and with forethought!<br />
<br />
<b>Follow-up on the brain-to-brain experiments</b> (2013-09-24)<br />
<br />
<div class="tr_bq">
<a href="http://mindhacks.com/" target="_blank">Mind Hacks</a> blog presents <a href="http://mindhacks.com/2013/09/24/it-is-mind-control-but-not-as-we-know-it/" target="_blank">a nice short analysis of the UW experiment</a> ("It is mind control but not as we know it"), written by Tom Stafford. Previously I logged an entry for the <a href="http://mind-in-mind.blogspot.com/2013/08/human-to-human-brain-communication.html" target="_blank">brain-to-brain communication</a> experiment conducted at University of Washington by Rajesh Rao. Here's the gist from Stafford:</div>
<blockquote>
In information terms, this is close to as simple as it gets. Even producing a signal which said what to fire at, as well as when to fire, would be a step change in complexity and wasn’t attempted by the group. TMS is a pretty crude device. Even if the signal the device received was more complex, it wouldn’t be able to make you perform complex, fluid movements, such as those required to track a moving object, tie your shoelaces or pluck a guitar. But this is a real example of brain to brain communication.
<br />
<br />
As the field develops the thing to watch is not whether this kind of communication can be done (we would have predicted it could be), but exactly how much information is contained in the communication.</blockquote>
<b>Human-to-human brain communication</b> (2013-08-27)<br />
<br />
A very limited form of brain-to-brain communication is described in a story on research at the University of Washington: <a href="http://www.washington.edu/news/2013/08/27/researcher-controls-colleagues-motions-in-1st-human-brain-to-brain-interface/" target="_blank">"Researcher controls colleague’s motions in 1st human brain-to-brain interface"</a> by Doree Armstrong and Michelle Ma, Aug 27, 2013. The experiment used EEG signals, sent via Skype, to transmit thoughts of simple movement, which the receiver got via transcranial magnetic stimulation - "a noninvasive way of delivering stimulation to the brain to elicit a response.... in this case, it was placed directly over the brain region that controls a person’s right hand."
<br />
<br />
I believe there are quite severe limits to the type of signal which could actually be transmitted and received via this mechanism, and the researchers confirm:<br />
<br />
<blockquote>
At first blush, this breakthrough brings to mind all kinds of science fiction scenarios. Stocco jokingly referred to it as a “Vulcan mind meld.” But Rao cautioned this technology only reads certain kinds of simple brain signals, not a person’s thoughts. And it doesn’t give anyone the ability to control your actions against your will.<br />
<br />
Both researchers were in the lab wearing highly specialized equipment and under ideal conditions. They also had to obtain and follow a stringent set of international human-subject testing rules to conduct the demonstration.<br />
<br />
“I think some people will be unnerved by this because they will overestimate the technology,” Prat said. “There’s no possible way the technology that we have could be used on a person unknowingly or without their willing participation.”</blockquote>
<b>Thin slicing the brain.</b> (2013-06-21)<br />
<br />
Creates a whole lot of data! <i>Nature</i> reports on <a href="http://www.nature.com/news/whole-human-brain-mapped-in-3d-1.13245" target="_blank">'Whole human brain mapped in 3D'</a> by Helen Shen, June 20, 2013. The atlas was created from 7,400 slices of a human brain, each thinner than a human hair, and nicknamed 'BigBrain'. Here's the quick summary:<br />
<blockquote>
The brain is comprised of a heterogeneous network of neurons of different sizes and with shapes that vary from triangular to round, packed more or less tightly in different areas. BigBrain reveals variations in neuronal distribution in the layers of the cerebral cortex and across brain regions — differences that are thought to relate to distinct functional units.</blockquote>
Given that we are still <a href="http://mind-in-mind.blogspot.com/2013/05/modeling-simple-worms.html" target="_blank">working on a model for a simple worm with 302 neurons</a>, there's obviously a long way to go with the full human brain. But you gotta start somewhere, and I'm sure that having an accurate map will help (now just drawn from one example, but as they do more they will get an idea of the individual differences that are possible - I'll bet they can be pretty significant).<br />
<br />
<b>What's the program in the Chinese Room?</b> (2013-06-18)<br />
<br />
It's really big and complicated! That's my main takeaway from the Dennett writing on the Searle thought experiment (in <i>Intuition Pumps</i> and other books).<br />
<br />
Here's the description of the scenario from <a href="http://en.wikipedia.org/wiki/Chinese_room" target="_blank">Wikipedia</a>:<br />
<blockquote class="tr_bq">
It supposes that there is a program that gives a computer the ability to carry on an intelligent conversation in written Chinese. If the program is given to someone who speaks only English to execute the instructions of the program by hand, then in theory, the English speaker would also be able to carry on a conversation in written Chinese. However, the English speaker would not be able to understand the conversation. Similarly, Searle concludes, a computer executing the program would not understand the conversation either.</blockquote>
So - what might this program consist of? Obviously there is no simple algorithm for taking in a string of Chinese characters one by one and sending out a meaningful response character by character. It would need all sorts of features, such as memory of the current conversation (to provide context to any given input), the ability to distinguish questions from comments from opinions, and so much more. Of course, any such program could never practically be carried out step by step by hand, unless you are willing to wait days, if not months or years, for responses! <br />
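To make the point concrete, here is a minimal toy sketch (my own illustration - nothing like this appears in Searle or Dennett) of why even a trivial conversational rule-follower needs internal state beyond a symbol-by-symbol lookup table:

```python
class TinyRoom:
    """A caricature of the Chinese Room rule book, with conversation memory."""

    def __init__(self):
        # Context from prior turns - exactly what a pure
        # character-by-character lookup table lacks.
        self.history = []

    def respond(self, utterance: str) -> str:
        self.history.append(utterance)
        if utterance.endswith("?"):
            # Even answering "how many questions have you asked?" requires
            # remembering the whole conversation so far.
            n = sum(u.endswith("?") for u in self.history)
            return f"That makes {n} question(s) so far."
        return "Noted."

room = TinyRoom()
print(room.respond("Hello."))              # -> Noted.
print(room.respond("What is your name?"))  # -> That makes 1 question(s) so far.
```

Scaling this from counting questions to fluent, context-sensitive Chinese conversation is where the program's size and complexity explode - which is exactly the point about memory and distinguishing kinds of utterances.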
<br />
If we simply assume that such a program exists and works as described, then it does seem to me that the outsider interacting with the room would grant a level of understanding to it. The Watson program that can play Jeopardy seems to be getting relatively close to this level of sophistication, although it was built for the Answer/Question format only.<br />
<br />
<b>What is a zombie?</b> (2013-06-10)<br />
<br />
Perhaps a whole lot more than you thought... thinking on Dennett and the zombie concept, mostly drawn from <a href="http://www.guardian.co.uk/books/2013/may/15/intuition-pumps-tools-dennett-review" target="_blank">Intuition Pumps</a> (2013).<br />
<br />
The philosophical concept of the zombie seems to start from a fairly simple definition: "a hypothetical being that is indistinguishable from a normal human being except in that it lacks conscious experience, qualia, or sentience." (drawn from <a href="http://en.wikipedia.org/wiki/Philosophical_zombie" target="_blank">Wikipedia</a>)<br />
<br />
Dennett's important point is that given this definition, it must be true that this zombie has very complex functional abilities - it must have functionality to support all sorts of things that normal humans can do - such as visual-auditory-olfactory-touch-taste sensory input, memory (perhaps slightly faulty), color recognition, facial recognition & linking face to name, ability to know that recognition of friends & family should trigger different behavior than recognition, say, of a politician, and so much more. While by definition it does not have conscious experience, it's hard to say how one could ever confirm that this was the case. <br />
<br />
Dennett then goes on to examine what he calls a subset of zombies, those which have "equipment that permits it to monitor its own activities, both internal and external, so it has internal (nonconscious) higher-order informational states that are about its other internal states." (p. 290). He calls these 'zimboes' but it's unclear to me whether such equipment is actually necessary in all zombies in order to produce the definitional behavior of being indistinguishable. Dennett claims that only a zimbo can "hold its own in everyday interactions" - and that's my sense as well - to be indistinguishable from a normal human. So I guess I'm unsure of why Dennett creates this new category of zimbo, if the zimbo has equipment that all zombies must have.<br />
<br />
At a later point, Dennett examines a couple of cases of non-normal human pathologies around facial recognition. Prosopagnosics are people who do not recognize people's faces, while people with Capgras delusion can recognize people but believe they are 'imposters' - not truly the person they resemble. Research on brain function seems to indicate that there are at least two mechanisms at work in normal facial recognition: unconscious visual processing that also ties into emotional recognition, and conscious recognition of 'knowing who it is'. If you can show that the unconscious mechanisms are broken (as apparently in the case of Capgras), leading to an altered conscious experience (the sense of an imposter), then it appears the qualia are quite tightly tied to the unconscious mechanisms (but by definition qualia are supposed to be the conscious bit).<br />
<br />
In other words, it's hard to draw a neat line around qualia when you look closely. Again pointing out the difficulty of truly imagining the zombie.<br />
<br />
I would argue that regardless of the brain mechanisms in use (and I do agree that many modules or mechanisms are used), the subjective experience as a whole is the emergent phenomenon of interest, and the prosopagnosic indeed has a different subjective experience than normal people, as does the Capgras subject. It is a fact that the Capgras subject is deluded about reality, but that fact doesn't alter the subjective experience of seeing people as impostors.<br />
<br />
<b>Dennett and reports on consciousness...</b> (2013-06-07)<br />
<br />
<div style="color: #222222; font-family: arial, sans-serif; font-size: 13px;">
I've been reading some <a href="http://ase.tufts.edu/cogstud/incbios/dennettd/dennettd.htm" target="_blank">Daniel Dennett</a> lately (both <i>Consciousness Explained</i> and his new <i>Intuition Pumps</i>) and reflecting on many of his concepts. Dennett argues for what he calls heterophenomenology as a method of scientifically researching consciousness: basically, taking reports from subjects in as neutral a way as possible (i.e. minimizing assumptions), and then trying to evaluate and explain these reports (i.e. are they right, what causes them, etc.). In at least some descriptions, he seems to see the goal as simply a binary true/false evaluation - presumably of whether the perception matches what's really evident (as judged by objective observers).</div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 13px;">
<br /></div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 13px;">
Rather than a simple binary evaluation of right/wrong, I propose there are multiple angles that can and should be examined.<br />
1. If the report includes descriptions of the world outside the subject, how do these compare to reports of 3rd parties? Or to other measures of reality?<br />
2. If the report includes descriptions of internal sensations, how do these compare to what we know about the physical basis for the senses?<br />
3. If the report includes an explanation or reason for the subject's experience, how does that compare to various existing theories and studies of behavior?</div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 13px;">
<br /></div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 13px;">
So in the case of the blind spot, I think most people will not report any blind spot unless they follow a specific procedure: staring at one point while moving another point off to the side closer or farther away until it can't be seen. Physically, we know there are no rods and cones at the back of the eye where the optic nerve exits. So on criterion 1 there is actually a good match with reality, because there appears to be "filling in" of the spot, likely achieved because the eyes are usually shifting around, not staring at one point, and somehow a full visual field is produced (criterion 2).</div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 13px;">
<br /></div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 13px;">
Many visual illusions indicate that the perception includes features that are not really in the picture. This seems to indicate that there is construction or filling in of apparent patterns. In general I suspect this is a useful feature in dealing with the world, in particular for cases where what we are looking at is partially obscured. </div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 13px;">
<br /></div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 13px;">
Now consider a case where the subject has unknowingly been given a drug that commonly causes hallucinations. The subject reports that the furniture appears to be melting. Here there is an incorrect match with reality. If the subject reports that he may be "losing his mind", hopefully the observer will let him know that in fact the experience is due to a drug and will end soon. The subject did not really know the reason for the experience.<br />
While the subject's report about the outside world is clearly wrong, I don't think we can say that the subject's internal experience is wrong or untrue. In this case we know the drug has neurochemical properties, one effect of which is altering the subject's experience.</div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 13px;">
<br /></div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 13px;">
In other cases of anomalous internal experiences, like a near death experience or an out-of-body experience, we can say that other observers in the immediate area could not detect any outside (i.e. real world) trace of it, but not that the report is wrong per se. I think it's worth trying to both explain how such experiences can occur, and whether such experiences are a result of and/or can result in physical changes (such as neuronal rewiring).</div>
<b>Jaron Lanier on social media</b> (2013-05-26)<br />
<br />
From <a href="http://www.jaronlanier.com/gadgetwebresources.html" target="_blank">You Are Not a Gadget</a>, his 2010 book. Just found this bit intriguing, though I'm not sure I fully buy it:<br />
<blockquote>
Children want attention. Therefore, young adults, in their newly extended childhood, can now perceive themselves to be finally getting enough attention, through social networks and blogs. Lately, the design of online technology has moved from answering this desire for attention to addressing an even earlier developmental stage.
<br />
<br />
Separation anxiety is assuaged by constant connection. Young people announce every detail of their lives on services like Twitter not to show off, but to avoid the closed door at bedtime, the empty room, the screaming vacuum of an isolated mind. (p. 180)</blockquote>