Discussion about this post

Luke

Re: "Doing valuable work in this area requires a willingness to turn theoretical research into practical frameworks that can be used to estimate the likelihood of consciousness and sentience in ML systems."

In the continued absence of a convincing mechanistic theory of phenomenal consciousness, one could develop a long list of "potential indicators of consciousness," give each indicator a different evidential weight, catalogue which classes of ML systems exhibit which indicators, and use this to produce a (very speculative!) quantitative estimate for the likelihood of phenomenal consciousness in one class of ML systems vs. another.
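The weighted-indicator approach described above could be sketched roughly as follows. This is only an illustrative toy: the indicator names, weights, and system profiles below are invented placeholders, not values from the literature.

```python
# Hypothetical sketch of a weighted-indicator scoring scheme.
# Indicator names and evidential weights are invented for illustration;
# real values would need to come from the academic literature.
WEIGHTS = {
    "global_workspace": 0.30,            # information broadcast across modules
    "higher_order_representations": 0.25,
    "flexible_learning": 0.20,
    "nociception_analog": 0.15,
    "attention_mechanisms": 0.10,
}

def consciousness_score(profile, weights=WEIGHTS):
    """Very speculative proxy: weighted fraction of indicators a system exhibits.

    `profile` maps indicator name -> bool (does this class of ML systems
    exhibit the indicator?). Returns a value in [0, 1].
    """
    total = sum(weights.values())
    present = sum(w for name, w in weights.items() if profile.get(name, False))
    return present / total

# Comparing two hypothetical classes of ML systems:
llm_profile = {"attention_mechanisms": True, "flexible_learning": True}
agent_profile = {"flexible_learning": True, "nociception_analog": True,
                 "global_workspace": True}
print(round(consciousness_score(llm_profile), 2))
print(round(consciousness_score(agent_profile), 2))
```

The point of such a sketch is only to make the bookkeeping explicit; the hard work is in choosing indicators and weights, which remain deeply contested.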

Overlapping lists of potential indicators of consciousness that have been proposed in the academic literature are here:

https://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood#PCIFsTable

https://rethinkpriorities.org/invertebrate-sentience-table

Of course, in addition to the question of "likelihood of being phenomenally conscious at all," there is the issue that some creatures (and ML systems) may have more "moral weight" than others, e.g. due to differing typical intensity of experience in response to canonical stimuli, differing "clock speeds" (# of subjectively distinguishable experiential moments per objective second), and various other factors that one might intuitively think are morally relevant. I sketched some initial thoughts at the link below, which could potentially be applied to the analysis of different classes of ML models:

https://www.lesswrong.com/posts/2jTQTxYNwo6zb3Kyp/preliminary-thoughts-on-moral-weight
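The moral-weight factors mentioned above (intensity of experience, "clock speed") could in principle be layered on top of a consciousness likelihood. A minimal sketch, with all numbers invented and the multiplicative combination itself an assumption rather than anything argued for in the linked post:

```python
# Hypothetical sketch: combining a consciousness likelihood with moral-weight
# adjustment factors. The multiplicative form and all values are placeholder
# assumptions for illustration only.
def expected_moral_weight(p_conscious, intensity=1.0, clock_speed=1.0):
    """Expected moral weight relative to some baseline creature.

    p_conscious: speculative probability of phenomenal consciousness
    intensity:   typical experience intensity vs. baseline, for canonical stimuli
    clock_speed: subjectively distinguishable moments per objective second,
                 relative to baseline
    """
    return p_conscious * intensity * clock_speed

# A system judged 50% likely to be conscious, with half the baseline intensity
# but four times the clock speed:
weight = expected_moral_weight(0.5, intensity=0.5, clock_speed=4.0)
```

Whether these factors should multiply, and whether they are even separable, are exactly the kinds of open questions the linked post discusses.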

Jason Schukraft at Rethink Priorities is leading some projects building on this past work.

AI/End Of The World

Would love to read today's version of this. The field has progressed considerably over the last four years 😅

12 more comments...
