Renowned neuroscientist tackled by NDE science.
Artificial General Intelligence (AGI) is sidestepping the consciousness elephant that isn’t in the room, the brain, or anywhere else. As we push the boundaries of machine intelligence, we will inevitably come back to the most fundamental questions about our own experience. And as AGI inches closer to reality, these questions become not just philosophical musings, but practical imperatives.
This interview with neuroscience heavyweight Christof Koch brings this tension into sharp focus. While Koch’s work on the neural correlates of consciousness has been groundbreaking, his stance on consciousness research outside his immediate field raises critical questions about the nature of consciousness – questions that AGI developers can’t afford to ignore.
Four key takeaways from this conversation:
1. The Burden of Proof in Consciousness Studies
Koch argues for a high standard of evidence when it comes to claims about consciousness existing independently of the brain. However, this stance raises questions about scientific objectivity:
“Extraordinary claims require extraordinary evidence… I haven’t seen any [white crows]. So far, all the data I’ve looked at, and I’ve looked at a lot of data, I’ve never seen a white crow.”
Key Question: Does the demand for “extraordinary evidence” have a place in unbiased scientific inquiry, especially with regard to published peer-reviewed work?
2. The Challenge of Interdisciplinary Expertise
Despite Koch’s eminence in neuroscience, the interview reveals potential gaps in his knowledge of near-death experience (NDE) research:
“I work with humans, I work with animals. I know what it is. EEG, I know the SNR, right? So I, I know all these issues.”
Key Question: How do we balance respect for expertise in one field with the need for deep thinking about contradictory data sets? Should Koch have “degraded gracefully” when confronted with data outside his specialty?
3. The Limitations of “Agree to Disagree” in Scientific Discourse
When faced with contradictory evidence, Koch resorts to a diplomatic but potentially unscientific stance:
“I guess we just have to disagree.”
Key Question: “Agreeing to disagree” carries little weight in scientific debate, so why did my AI assistant go there?
4. The “White Crow” Dilemma in Consciousness Research
The interview touches on William James’ famous “white crow” metaphor, highlighting the tension between individual cases and cumulative evidence:
“One instance of it would violate it. One, two instances of it, yeah, I totally agree. But we, I haven’t seen any…”
Key Question: Can AI outperform humans in dealing with contradictory evidence?
Thoughts?
Another week in AI and more droning on about how superintelligence is just around the corner and human morals and ethics are out the window. Maybe not. In this episode, Alex Tsakiris of Skeptiko/AI Truth Ethics and Ben Byford of the Machine Ethics podcast engage in a thought-provoking dialogue that challenges our assumptions about AI’s role in discerning truth, the possibility of machine consciousness, and the future of human agency in an increasingly automated world.