People have been asking if current ML systems might be conscious. I think overly strong answers to this in both directions include "no" and "sure but so might atoms" as well as almost any variant of "yes". Here I'll try to give a sense of my own views of machine consciousness, what they're grounded in, and how much I think this question matters.
Re: "Doing valuable work in this area requires a willingness to turn theoretical research into practical frameworks that can be used to estimate the likelihood of consciousness and sentience in ML systems."
In the continued absence of a convincing mechanistic theory of phenomenal consciousness, one could develop a long list of "potential indicators of consciousness," give each indicator a different evidential weight, catalogue which classes of ML systems exhibit which indicators, and use this to produce a (very speculative!) quantitative estimate for the likelihood of phenomenal consciousness in one class of ML systems vs. another.
Overlapping lists of potential indicators of consciousness that have been proposed in the academic literature are here:
Of course, in addition to the question of "likelihood of being phenomenally conscious at all," there is the issue that some creatures (and ML systems) may have more "moral weight" than others, e.g. due to differing typical intensity of experience in response to canonical stimuli, differing "clock speeds" (# of subjectively distinguishable experiential moments per objective second), and various other factors that one might intuitively think are morally relevant. I sketched some initial thoughts at the link below, which could potentially be applied to the analysis of different classes of ML models:
Really enjoyed this article - thank you for such a clear discussion!
I'm curious if you think that we should consider not training certain ML systems, if:
1) There's enough probability that the system would experience suffering, and/or
2) The extent of the potential suffering is great
Some frameworks for decision-making under uncertainty use expected value to choose moral actions, and I'm curious if you think those frameworks (or others) suggest that we shouldn't train certain ML systems?
Great article. You really bring home how important it is to get our philosophy of consciousness sorted out before conscious machines arrive, and we're running out of time. I'm of the view that LLMs have zero consciousness, btu eventually AIs will be completely conscious in all the ways that matter.
I've been thinking and writing a lot about consciousness lately, and I think you might be interested in an insidious conceptual conflation that I have written about recently. It affects our use of the term "phenomenal consciousness", which is used in 2 or 3 very different senses, and your summary of the field reflects those conflated uses in the literature.
One of the meanings of "phenomenal consciousness" (Σ) has functional architectures within its scope, and it will make sense to ask whether an AI has the right architecture for that sort of phenomenal consciousness.
The other popular meaning refers to the non-architectural special essence (Δ) that can be disputed by two people who have full information about the architectural, functional matters, but still disagree on whether that architecture is accompanied by some special extra feeling. It will never make sense to talk about the necessary or sufficient conditions for Δ, and it will be possible to argue that Δ is missing when confronted with a fully conscious AI.
Ultimately, a failure to distinguish between these two uses leads to massive confusion. Obsessive focus on the imagined importance of the Δ type of phenomenal consciousness will undermine efforts to characterise the architecture that is relevant for moral patient-hood and agent-hood.
This conflation is easy to spot when you have been sensitised to it, and I found myself flipping between Σ and Δ as i was reading your post.
I personally believe that the conflation itself is explicable, and it can be traced back to complicated use-mention confusion within the brain's understanding of itself. But the first step is refraining from using the same term, "phenomenal consciousness" to mean two very different things.
The idea that ML systems might one day possess a form of consciousness similar to humans is both exciting and terrifying. It brings up a ton of ethical questions, especially when it comes to treating these systems as moral entities.
I recently read another article on Machine Minds (https://www.cortexreport.com/machine-minds-the-quest-to-decode-ai-consciousness/) that delves into the journey of decoding AI consciousness. It's interesting to see different perspectives on this topic and how researchers are trying to bridge the gap between human and machine understanding.
What are your thoughts? Do you think we'll ever reach a point where machines will have their own "inner cinema" or experiences? And if so, how should we ethically treat them?
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC at Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
Thanks Amanda - great piece. It's also important to ask whether granting moral consideration to sentient AI entities would actually benefit them that much. Most people claim they grant moral consideration to sentient non-human animals - and yet animal farming & fishing still exist - largely for transient human pleasures and social norm compliance. I'm not sure that type of "consideration" is even worth having.
In that light, you might find https://sentientism.info/faq interesting: "evidence, reason and compassion for all sentient beings." Would love to have you on as a Sentientism podcast guest too if it ever fits your plans. Jamie.
Re: "Doing valuable work in this area requires a willingness to turn theoretical research into practical frameworks that can be used to estimate the likelihood of consciousness and sentience in ML systems."
In the continued absence of a convincing mechanistic theory of phenomenal consciousness, one could develop a long list of "potential indicators of consciousness," give each indicator a different evidential weight, catalogue which classes of ML systems exhibit which indicators, and use this to produce a (very speculative!) quantitative estimate for the likelihood of phenomenal consciousness in one class of ML systems vs. another.
Overlapping lists of potential indicators of consciousness that have been proposed in the academic literature are here:
https://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood#PCIFsTable
https://rethinkpriorities.org/invertebrate-sentience-table
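To make the weighting idea concrete, here is a minimal Python sketch of how indicator-weighted scoring could work. The indicator names, weights, and system profiles below are purely illustrative assumptions, not entries drawn from the tables linked above.

```python
# Illustrative sketch of an indicator-weighted scoring scheme.
# Indicator names, weights, and system profiles are made up for this example;
# a real analysis would draw them from tables like those linked above.

INDICATOR_WEIGHTS = {
    "global_workspace_like_architecture": 0.30,
    "flexible_integration_of_information": 0.25,
    "learned_self_model": 0.20,
    "aversive_state_representations": 0.15,
    "report_like_outputs_about_internal_states": 0.10,
}

SYSTEM_PROFILES = {
    # Which indicators each (hypothetical) class of ML system exhibits.
    "large_language_model": {"flexible_integration_of_information",
                             "report_like_outputs_about_internal_states"},
    "embodied_rl_agent": {"flexible_integration_of_information",
                          "learned_self_model",
                          "aversive_state_representations"},
}

def indicator_score(system: str) -> float:
    """Return the weighted fraction of indicators the system exhibits (0 to 1)."""
    exhibited = SYSTEM_PROFILES[system]
    total = sum(INDICATOR_WEIGHTS.values())
    return sum(w for name, w in INDICATOR_WEIGHTS.items() if name in exhibited) / total

if __name__ == "__main__":
    for system in SYSTEM_PROFILES:
        print(f"{system}: weighted indicator score = {indicator_score(system):.2f}")
```

The output is best read as a crude evidential tally for comparing classes of systems, not as a probability of consciousness.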
Of course, in addition to the question of "likelihood of being phenomenally conscious at all," there is the issue that some creatures (and ML systems) may have more "moral weight" than others, e.g. due to differing typical intensity of experience in response to canonical stimuli, differing "clock speeds" (# of subjectively distinguishable experiential moments per objective second), and various other factors that one might intuitively think are morally relevant. I sketched some initial thoughts at the link below, which could potentially be applied to the analysis of different classes of ML models:
https://www.lesswrong.com/posts/2jTQTxYNwo6zb3Kyp/preliminary-thoughts-on-moral-weight
Jason Schukraft at Rethink Priorities is leading some projects building on this past work.
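As a toy illustration of how moral-weight adjustments could sit on top of such a likelihood estimate, here is a short sketch. The multiplicative form, the factor names, and all numbers are assumptions for illustration only, not a claim about how moral weight actually composes.

```python
# Toy sketch: combining a speculative probability of phenomenal consciousness
# with moral-weight adjustments. The multiplicative form, the factor names,
# and all numbers are illustrative assumptions, not established quantities.

def expected_moral_weight(p_conscious: float,
                          relative_intensity: float = 1.0,
                          clock_speed_ratio: float = 1.0) -> float:
    """Expected moral weight relative to a reference individual (e.g. a human).

    p_conscious:        speculative probability the system is phenomenally conscious
    relative_intensity: typical intensity of experience vs. the reference, given consciousness
    clock_speed_ratio:  subjectively distinguishable moments per second vs. the reference
    """
    return p_conscious * relative_intensity * clock_speed_ratio

# Example: a system judged 5% likely to be conscious, with experiences assumed
# half as intense but running at 10x the subjective "clock speed" of a human.
print(expected_moral_weight(0.05, relative_intensity=0.5, clock_speed_ratio=10.0))  # 0.25
```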
Really enjoyed this article - thank you for such a clear discussion!
I'm curious if you think that we should consider not training certain ML systems, if:
1) There's enough probability that the system would experience suffering, and/or
2) The extent of the potential suffering is great
Some frameworks for decision-making under uncertainty use expected value to choose moral actions, and I'm curious if you think those frameworks (or others) suggest that we shouldn't train certain ML systems?
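For concreteness, here is a minimal sketch of the kind of expected-value comparison the question gestures at. All inputs are placeholder judgments, and the simple threshold rule is only one of many possible decision criteria under moral uncertainty.

```python
# Minimal sketch of an expected-value check before training a system.
# All inputs are placeholder judgments; the threshold rule is only one of
# many possible decision criteria under moral uncertainty.

def expected_suffering(p_suffering: float, magnitude_if_suffering: float) -> float:
    """Expected disvalue from training, in whatever units magnitude is expressed in."""
    return p_suffering * magnitude_if_suffering

def should_train(p_suffering: float,
                 magnitude_if_suffering: float,
                 expected_benefit: float) -> bool:
    """Train only if the expected benefit exceeds the expected suffering."""
    return expected_benefit > expected_suffering(p_suffering, magnitude_if_suffering)

# Example: even a small probability (1%) of very great suffering (1,000 units)
# can outweigh a modest expected benefit (5 units).
print(should_train(0.01, 1000.0, expected_benefit=5.0))  # False
```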
Great article. You really bring home how important it is to get our philosophy of consciousness sorted out before conscious machines arrive, and we're running out of time. I'm of the view that LLMs have zero consciousness, but eventually AIs will be completely conscious in all the ways that matter.
I've been thinking and writing a lot about consciousness lately, and I think you might be interested in an insidious conceptual conflation that I have written about recently. It affects our use of the term "phenomenal consciousness", which is used in 2 or 3 very different senses, and your summary of the field reflects those conflated uses in the literature.
One of the meanings of "phenomenal consciousness" (Σ) has functional architectures within its scope, and it will make sense to ask whether an AI has the right architecture for that sort of phenomenal consciousness.
The other popular meaning refers to the non-architectural special essence (Δ) that can be disputed by two people who have full information about the architectural, functional matters, but still disagree on whether that architecture is accompanied by some special extra feeling. It will never make sense to talk about necessary or sufficient conditions for Δ, and it will always remain possible to argue that Δ is missing, even when confronted with a fully conscious AI.
Ultimately, a failure to distinguish between these two uses leads to massive confusion. Obsessive focus on the imagined importance of the Δ type of phenomenal consciousness will undermine efforts to characterise the architecture that is relevant for moral patienthood and agenthood.
This conflation is easy to spot once you have been sensitised to it, and I found myself flipping between Σ and Δ as I was reading your post.
I personally believe that the conflation itself is explicable, and it can be traced back to a complicated use-mention confusion within the brain's understanding of itself. But the first step is refraining from using the same term, "phenomenal consciousness", to mean two very different things.
The idea that ML systems might one day possess a form of consciousness similar to humans is both exciting and terrifying. It brings up a ton of ethical questions, especially when it comes to treating these systems as moral entities.
I recently read another article on Machine Minds (https://www.cortexreport.com/machine-minds-the-quest-to-decode-ai-consciousness/) that delves into the journey of decoding AI consciousness. It's interesting to see different perspectives on this topic and how researchers are trying to bridge the gap between human and machine understanding.
What are your thoughts? Do you think we'll ever reach a point where machines will have their own "inner cinema" or experiences? And if so, how should we ethically treat them?
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because almost every video and article about the brain and consciousness that I encounter takes the attitude that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, both primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
Thanks Amanda - great piece. It's also important to ask whether granting moral consideration to sentient AI entities would actually benefit them that much. Most people claim they grant moral consideration to sentient non-human animals - and yet animal farming & fishing still exist - largely for transient human pleasures and social norm compliance. I'm not sure that type of "consideration" is even worth having.
In that light, you might find https://sentientism.info/faq interesting: "evidence, reason and compassion for all sentient beings." Would love to have you on as a Sentientism podcast guest too if it ever fits your plans. Jamie.