5 Comments

Re: "Doing valuable work in this area requires a willingness to turn theoretical research into practical frameworks that can be used to estimate the likelihood of consciousness and sentience in ML systems."

In the continued absence of a convincing mechanistic theory of phenomenal consciousness, one could develop a long list of "potential indicators of consciousness," give each indicator a different evidential weight, catalogue which classes of ML systems exhibit which indicators, and use this to produce a (very speculative!) quantitative estimate for the likelihood of phenomenal consciousness in one class of ML systems vs. another.

Overlapping lists of potential indicators of consciousness that have been proposed in the academic literature are here:

https://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood#PCIFsTable

https://rethinkpriorities.org/invertebrate-sentience-table
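
As a toy sketch of that kind of weighted-indicator estimate (the indicator names, weights, and system profiles below are hypothetical placeholders, not values taken from either list above):

```python
# Toy weighted-indicator estimate. All indicators, weights, and system
# profiles are hypothetical placeholders, not values from the linked reports.

# Evidential weight assigned to each potential indicator of consciousness.
INDICATOR_WEIGHTS = {
    "global_workspace_like_architecture": 0.30,
    "self_modeling": 0.25,
    "nociception_like_signals": 0.25,
    "flexible_cross_domain_learning": 0.20,
}

def consciousness_score(indicators_present: set) -> float:
    """Fraction of total evidential weight exhibited by a class of systems.

    A crude relative score for comparing classes of ML systems, not a
    calibrated probability of phenomenal consciousness.
    """
    total = sum(INDICATOR_WEIGHTS.values())
    present = sum(weight for name, weight in INDICATOR_WEIGHTS.items()
                  if name in indicators_present)
    return present / total

# Example: compare two hypothetical classes of ML systems.
print(round(consciousness_score({"flexible_cross_domain_learning", "self_modeling"}), 2))  # 0.45
print(round(consciousness_score({"nociception_like_signals"}), 2))                         # 0.25
```

Turning a relative score like this into an actual probability would of course require further (and heavily contested) judgment calls about priors and about how much each indicator really tells us.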

Of course, in addition to the question of how likely a system is to be phenomenally conscious at all, there is the issue that some creatures (and ML systems) may have more "moral weight" than others, e.g. due to differences in the typical intensity of experience in response to canonical stimuli, differences in "clock speed" (the number of subjectively distinguishable experiential moments per objective second), and various other factors that one might intuitively think are morally relevant. I sketched some initial thoughts at the link below, which could potentially be applied to the analysis of different classes of ML models:

https://www.lesswrong.com/posts/2jTQTxYNwo6zb3Kyp/preliminary-thoughts-on-moral-weight

Jason Schukraft at Rethink Priorities is leading some projects building on this past work.
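
A toy way to combine such moral-weight factors with a consciousness estimate (the factor values below are purely illustrative assumptions, not figures from the linked post):

```python
# Toy moral-weight calculation. All factor values are illustrative
# assumptions, normalized so that a typical human = 1.0.

def expected_moral_weight(p_conscious: float,
                          intensity_factor: float,
                          clock_speed_factor: float) -> float:
    """Expected moral weight relative to a typical human.

    p_conscious: estimated probability of phenomenal consciousness.
    intensity_factor: typical intensity of experience in response to
        canonical stimuli, relative to a human (human = 1.0).
    clock_speed_factor: subjectively distinguishable experiential moments
        per objective second, relative to a human (human = 1.0).
    """
    return p_conscious * intensity_factor * clock_speed_factor

# A hypothetical ML system: 5% chance of being conscious, experiences half
# as intense as a human's, but ten times the human "clock speed".
print(round(expected_moral_weight(0.05, 0.5, 10.0), 2))  # 0.25
```

Multiplying the factors together is itself a substantive assumption; other aggregation rules are defensible.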


Really enjoyed this article - thank you for such a clear discussion!

I'm curious whether you think we should consider not training certain ML systems if:

1) There's enough probability that the system would experience suffering, and/or

2) The extent of the potential suffering is great

Some frameworks for decision-making under uncertainty use expected value to choose moral actions - do you think those frameworks (or others) suggest that we shouldn't train certain ML systems?
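
For what it's worth, a bare-bones expected-value version of that decision (with entirely made-up numbers) might look like this:

```python
# Bare-bones expected-value comparison for the "should we train it?" question.
# All probabilities and value estimates are made-up placeholders.

def expected_net_value(p_suffering: float,
                       suffering_magnitude: float,
                       expected_benefit: float) -> float:
    """Expected net value of training a system, in arbitrary value units.

    p_suffering: probability the trained system would experience suffering.
    suffering_magnitude: disvalue of that suffering if it occurs.
    expected_benefit: expected benefit of training and deploying the system.
    """
    return expected_benefit - p_suffering * suffering_magnitude

# Train only if the expected net value is positive.
net = expected_net_value(p_suffering=0.01, suffering_magnitude=1000.0,
                         expected_benefit=5.0)
print(net > 0)  # False: the expected disvalue (about -5) outweighs the benefit
```

Of course, this just pushes the hard work into estimating the inputs, and expected value maximization itself gets contentious when the probabilities are tiny and the stakes are huge.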


The idea that ML systems might one day possess a form of consciousness similar to that of humans is both exciting and terrifying. It brings up a ton of ethical questions, especially when it comes to treating these systems as moral entities.

I recently read another article on Machine Minds (https://www.cortexreport.com/machine-minds-the-quest-to-decode-ai-consciousness/) that delves into the journey of decoding AI consciousness. It's interesting to see different perspectives on this topic and how researchers are trying to bridge the gap between human and machine understanding.

What are your thoughts? Do you think we'll ever reach a point where machines will have their own "inner cinema" or experiences? And if so, how should we ethically treat them?


It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because, in almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep the theory in front of the public. And, obviously, I consider it the route to a truly conscious machine, both primary and higher-order.

My advice to people who want to create a conscious machine is to ground themselves seriously in the extended TNGS and the Darwin automata first, and then proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461


Thanks Amanda - great piece. It's also important to ask whether granting moral consideration to sentient AI entities would actually benefit them that much. Most people claim they grant moral consideration to sentient non-human animals - and yet animal farming & fishing still exist - largely for transient human pleasures and social norm compliance. I'm not sure that type of "consideration" is even worth having.

In that light, you might find https://sentientism.info/faq interesting: "evidence, reason and compassion for all sentient beings." Would love to have you on as a Sentientism podcast guest too if it ever fits your plans. Jamie.
