Science is Approaching the Soul

A little while ago OpenAI announced o3, a new (and extremely expensive) LLM. There’s a lot to say about its new capabilities across a variety of domains, but the one relevant here is its performance on the ARC Challenge, a measure of general intelligence. Without boring you with the technical details, previous LLMs have done quite poorly on such measures of general intelligence, but o3 is now scoring at human level.

So have we achieved the vaunted “artificial general intelligence”? Not quite: there are other general intelligence tests on which it still doesn’t do so well. But the writing is on the wall as we whittle away at those, and even some of the “AI is just glorified autocomplete” folks are squirming a bit at a system that can, for example, solve unique, bespoke math problems that would take a professional mathematician a day to solve, and that scores among the top professional coders in the world.

We’re closing in on being able to replicate any kind of human-level, general-purpose reasoning, creating what is known as a philosophical zombie: an entity that can act and respond to stimuli just as a human would, but that has no internal sensory experience.

And this is where things get interesting, because the philosophical zombie has been at the center of the philosophical debate about consciousness and, in a sense, the soul. Suddenly all sorts of esoteric philosophy questions will become quite relevant in our day-to-day lives, and I think a religious perspective that posits a soul as the repository of the self will come out a winner, if not the winner, as the orthodox, mainstream naturalist position is increasingly found wanting.

Why? Consciousness is famously one of the hardest problems in philosophy and science. On one side of the debate is a group of prominent thinkers, including Daniel Dennett (RIP) and the Churchlands, who argue that what we label consciousness and internal subjective experience (which, on some takes, doesn’t even really exist) is the product of what is essentially a meat computer with enough power and the right software and hardware.

But as we become increasingly able to replicate the computing part of the human brain, we are arguably getting no closer to internal sense experience, even as we chip away at everything that is not consciousness, which remains stubbornly intractable. This isn’t surprising to me. Like many non-naturalists, I think it is a category error to attribute internal experience to the mechanics of atoms bouncing against and with each other. Unsurprisingly, I don’t think that consciousness can simply arise out of faster supercomputers and larger neural networks, that it’s an emergent property of raw compute. I don’t think, fundamentally, you can get from bouncing particles and electrical charges to self-awareness, no matter how complex your cognitive gears and pulleys are.

ChatGPT cannot feel any more pain than an abacus just because it’s more complex. As our electronic calculators become more sophisticated and more powerful than the human brain, with internal experience nowhere to be found, it is going to be increasingly difficult to argue that adding more RAM is going to make ChatGPT 20 feel pain or, famously, experience the sensation of the color red even if it knows everything about the color red. (Although it’s worth noting that the Latter-day Saint teaching that the soul is the body and spirit of man is perhaps friendlier to a sort of hybrid perspective on the mind/body problem than some alternatives.)

The common argument that internal sense experience (or “qualia”) is a mirage seems patently ridiculous, and is a testament to the idea that there are some things so stupid you need a PhD to believe them (maybe I should be nicer, but Dennett was similarly flippant about ideas that didn’t accord with his). If anything, the fact that I feel is the one thing I am sure about (“I think, therefore I am” and all that). But fine, if we’re going to take that perspective at face value, then its proponents need to have the courage of their convictions and, once OpenAI’s models can pass every artificial general intelligence test we throw at them, insist that we give ChatGPT human rights, since according to their framework there is no real difference between it and us at that point.

Of course, that’s ridiculous. ChatGPT doesn’t feel, and we do. And it’s interesting to me that there is very little discussion in the naturalist camp of ChatGPT being a person, even though we blew past the Turing Test years ago. We’re moving into the realm where thought experiments are becoming reality, and it’s becoming awkward for people who assumed that philosophical zombies would safely remain an abstraction in a lecture hall.

There is the chance that if we understood brain mechanics a little better we could find consciousness in the biochemistry equations of the brain, but as we get better and better at replicating the brain’s functions this will ironically lead to a sort of “science of the gaps” situation, where the gap becomes smaller and we rely more on, well, faith that consciousness is somewhere in the shrinking remainder. (IMHO there’s a similar situation with fine-tuning in physics and origin-of-life research.) Like the aether (or biological vitalism, ironically), after the umpteenth attempt to reproduce it fails, one has to start asking whether consciousness is really reducible to synaptic mechanics.

Of course, all of this doesn’t mean that everybody is going to rush out and get baptized in some faith or another, as there are non-”religious” options that allow for something beyond raw atomic mechanics. Even famous anti-religionist Sam Harris is sympathetic to some versions of panpsychism, in which the universe has a sort of consciousness (although I’m not sure why that wouldn’t qualify as “God”). David Chalmers, the main opponent of the Churchlands/Dennett crowd, who believes that consciousness is a fundamental facet of reality, does not identify as religious, although with him too it would be hard to look at his thought and not call it “spiritual.”

Still, people who see the relationship between scientific discovery and faith as simply one of science whittling away at phenomena traditionally explained by faith are living in the 19th century. Science is revealing more questions than answers and showing the limits of a mechanistic universe: the original primordial cell was much more, not less, complicated than Darwin believed; the fact that the universe has parameters seemingly precisely tuned for life has found widespread acceptance; and to top it all off, the universe “spookily” knows when we’re looking at it, according to quantum mechanics. I wouldn’t be surprised if, over the next decade or two, our inability to replicate self-awareness starts to make people take the idea of a soul more seriously and adds another data point tantalizingly suggesting that there is much more underlying this universe, metaphysically and fundamentally, than raw particles bouncing off of each other.


Comments

13 responses to “Science is Approaching the Soul”

  1. Thank you for this.

  2. Last Lemming

    I’m not sure you are on the same page with Joseph Smith here. Joseph was 100% a materialist (D&C 131:6-7):

    There is no such thing as immaterial matter. All spirit is matter, but it is more fine or pure, and can only be discerned by purer eyes;

    We cannot see it; but when our bodies are purified we shall see that it is all matter.

    Perhaps our “impurity” will prevent us from uncovering our spirits in the short and medium terms, but if or when we overcome that impurity in the long term, you can bet that we will piggyback on all the AI research that is being done now.

  3. Related to Last Lemming’s comment, it seems to me that with Joseph Smith’s claim that spirit is matter, all we’ve done is push the seat of consciousness from “this” matter to “that”. We are still left with the question if how matter gives rise to consciousness.

  4. *question of

  5. I do think that at some point, in ways I don’t even begin to understand, the physical, the metaphysical, and the moral all tie together (or else it’s pretty lucky that the omnipotent God of the universe also happens to be a paragon of virtue), but that happens at a point much higher than atoms A, B, and C being connected in the right way and suddenly becoming aware of each other. So while I think there is some of that (e.g., maybe consciousness and spirit are part of this fundamental aspect of the universe that is “refined matter”), it’s operating at a much higher/more abstract level than the college-level biochemistry that Dennett et al. think is giving rise to consciousness.

  6. Last Lemming,

    I think Joseph Smith categorizes intelligence as something different from element. And if so, then the primary question may have more to do with discovering if there’s a fundamental difference between spirit and intelligence rather than spirit and matter.

  7. Consciousness is the summing node of existence. It is a byproduct of the ability to choose the best options for survival in a brutal and oppositional world. ChatGPT might mimic consciousness, but it is only a pale reflection, a myna bird repeating what has been said. The real myna bird knows how to adapt in adverse circumstances and survive new and complex situations.

    The myna bird has intelligence that descends deep into the DNA, each cell, each mitochondrion. The bird has 3 billion years of experience in its consciousness. It has taken 13 billion years to create its perception.

  8. “But as we are increasingly able to replicate the computing part of the human brain…”

    But that’s not what we’re doing. AI researchers have produced tools that are remarkably good at imitating the *output* of the human brain–good enough that I really wish our society had functional governance that was capable of thinking through the implications and responding intentionally. But the way LLMs and similar AIs work does not resemble how the human brain works at all.

    That gives the materialists an out: “No, all our progress in LLMs has not given them consciousness, but they’re nothing like the human brain. If we could replicate how the human brain works electronically, then I’m sure we could create consciousness too.”

  9. Last Lemming

    Jack,

    We really don’t know how Joseph distinguished between “intelligences” and “spirits”. (Most people in these parts seem not to acknowledge any difference at all.) Personally, I view the difference as analogous to potential vs. kinetic energy. And I think what Stephen insists AI will never replicate is the kinetic aspect.

    RLD,

    Great point.

  10. Last Lemming,

    I’m in B. H. Roberts’s camp on that question. I’m sure you’re already aware of his thoughts on the subject, but I’m gonna blab anyway. He boiled Joseph’s interchangeable use of both terms down to how they were defined in 1820–40. Where Roberts did believe there was a difference between the two was in the notion of spirit embodiment, and I’m pretty much on board with him there too.

    That said, I lean very strongly in the direction of human beings being fully conscious intelligences before entering their first estate. And it’s that center of perception and awareness (IMO) that AI will never be able to replicate.

  11. @RLD: I’m not a neuroscientist, but my understanding is that we’re really not quite sure how the human brain works at that level, and that saying it’s pattern recognition with some hard-coded Chomskyan reasoning elements is a respectable position to take. In a sense, neural networks are patterned after how researchers thought brains worked a decade ago, with the parameters in a neural network analogous to the synapses in a brain. (Sorry if it sounds like I’m talking down to you; comment readers might need some background.) With this latest release it’s increasingly looking like zero-shot “thinking” by AI is becoming a thing, so the materialists can always say that it’s not *really* thinking like a brain, but as AGI benchmarks fall I think that will seem like an increasingly desperate claim.

  12. I’m not a neuroscientist either, or an AI expert–just a stats geek who has dug into deep learning a little to try to help researchers who want to use it for analysis. (Not counting the grad students I’ve had to set straight because they thought ChatGPT would let them avoid learning to code.) And I’m probably putting in more background than you need for the same reason you did.

    The idea of neural networks has been around since at least the 1950s. The behavior of a single neuron seemed simple enough to model mathematically, and the idea was that a computer simulating a network of them might behave more like a human brain. But nothing ever came of it, and when machine learning researchers started using neural networks again a couple of decades ago they were interested in neural networks because of their mathematical properties, not because of any supposed similarity to the brain.

    I can’t tell you whether the nodes in a modern AI neural net are a decent simulation of a neuron or not. But they’re not connected together at all like neurons in the brain. In the brain, neurons are connected in complex, seemingly chaotic ways. Neurons can fire at any time, and signals go in all directions. Some of the activity gets organized into “waves” which seem to have some relationship with awareness, given that they change when we sleep or are unconscious. In a modern AI neural net, the nodes are organized into layers, and “signals” only go from one layer to the next. There is no equivalent of brain waves. It’s all very structured and relatively simple–you can draw diagrams that describe all the connections.

    So if your argument is that awareness is an emergent phenomenon arising from complex interactions between signals sent over the 100 trillion unstructured connections between neurons in the human brain, then you wouldn’t expect anything of the sort to happen with current forms of AI, no matter how good they get at imitating the results of awareness. Especially if you think the waves are important. No matter how similar the results are, the mechanisms are fundamentally different.
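    To make the layered structure concrete, here is a minimal sketch of such a feedforward pass in Python, with made-up weights chosen purely for illustration:

    ```python
    # Toy feedforward "neural network": signals flow strictly from one layer
    # to the next, with none of the brain's recurrent, all-directions wiring.

    def relu(x):
        # Simple nonlinearity applied at each node
        return max(0.0, x)

    def layer(inputs, weights, biases):
        # One fully connected layer: each node sums its weighted inputs
        return [relu(sum(w * v for w, v in zip(row, inputs)) + b)
                for row, b in zip(weights, biases)]

    def forward(x, layers):
        # Pass the input through each layer in order; no feedback loops
        for weights, biases in layers:
            x = layer(x, weights, biases)
        return x

    # Hypothetical hand-picked weights, just to show the mechanics
    net = [
        ([[0.5, -0.2], [0.1, 0.9]], [0.0, 0.1]),  # 2 inputs -> 2 hidden nodes
        ([[1.0, 1.0]], [0.0]),                    # 2 hidden nodes -> 1 output
    ]

    print(forward([1.0, 2.0], net))
    ```

    You can draw a complete diagram of every connection in `net`, which is exactly the structured simplicity being contrasted here with the brain’s 100 trillion unstructured connections.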

  13. Stephen, I really enjoyed this post because my thinking has been going the same way and I was wondering why it wasn’t being talked about more. I think I’d add that even if we do figure out that consciousness is an output of our physical brain activity, it will require us to model this new thing that simply does not exist anywhere that I can see in the standard model of physics. So a huge change in our understanding of the world no matter the outcome.