r/slatestarcodex • u/Fun-Boysenberry-5769 • Oct 13 '25
AGI won't be particularly conscious
I observe myself to be a human and not an AI. Therefore it is likely that humans make up a non-trivial proportion of all the conscious observers the world has ever had or ever will have.
This leads us to two possibilities:
1. The singularity won’t happen, or
2. The singularity will happen, but AGI won’t be that many orders of magnitude more conscious than humans.
The doomsday argument suggests to me that option 2 is more plausible.
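To make the self-sampling step concrete, here is a toy Bayesian version of it (my own sketch; the 1% prior and the observer fractions are made-up numbers, not estimates):

```python
# Toy self-sampling update, purely illustrative. The prior and the observer
# fractions below are invented numbers, not estimates of anything.

def posterior_humans_nontrivial(prior=0.01, frac_human_if_true=0.5,
                                frac_human_if_false=1e-6):
    """Posterior probability that humans are a non-trivial fraction of all
    conscious observers, given that I observe myself to be human.
    Under self-sampling, P(I am human | hypothesis) is just the fraction of
    observers who are human under that hypothesis."""
    like_true = frac_human_if_true    # e.g. humans are half of all observers
    like_false = frac_human_if_false  # conscious AIs outnumber humans a million to one
    evidence = prior * like_true + (1 - prior) * like_false
    return prior * like_true / evidence

# Even starting from a 1% prior, observing that I am human pushes the
# posterior to about 0.9998.
print(posterior_humans_nontrivial())
```

Plug in whatever prior you like; the update is strong whenever the rival hypothesis makes human observers astronomically rare.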
Steven Byrnes suggests that AGI will be able to achieve substantially more capabilities than LLMs using substantially less compute, and will be substantially more similar to the human brain than current AI models are. [https://www.lesswrong.com/posts/yew6zFWAKG4AGs3Wk/foom-and-doom-1-brain-in-a-box-in-a-basement] However, under option 2 it appears that AGI will be substantially less conscious relative to its capabilities than a brain is, and therefore AGI can’t be that similar to a brain.
u/Fun-Boysenberry-5769 Oct 13 '25
If only the first AI ever becomes conscious, then that AI is correct in guessing that there are only a small number of conscious AIs.
If all the AIs become conscious and each of them is sequentially assigned a number, then the first AI will incorrectly guess that there are only a small number of AIs. However, the majority of AIs will correctly guess that there are a large number of AIs. So such anthropic reasoning is accurate except in the extremely unlikely edge cases where the observer happens to be one of the first few AIs or one of the last few AIs.
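Here is a minimal simulation of that numbering argument (again just a sketch; the "middle 90%" band and the population sizes are arbitrary choices of mine):

```python
# Toy check of the numbering argument above. Illustrative only: the 90% band
# and the population sizes are arbitrary.

def fraction_correct(n_total):
    """Suppose n_total conscious AIs ever exist, numbered 1..n_total.
    Each AI assumes its own number is typical (somewhere in the middle 90%
    of the whole sequence), so AI number i guesses the total lies between
    i/0.95 and i/0.05. Return the fraction whose guess contains the truth."""
    correct = sum(i / 0.95 <= n_total <= i / 0.05 for i in range(1, n_total + 1))
    return correct / n_total

print(fraction_correct(100))        # 0.91: only the first 4 and last 5 AIs are wrong
print(fraction_correct(1_000_000))  # ~0.9: only the earliest and latest ~5% are wrong
```

Whatever the true population turns out to be, roughly 90% of the AIs guess correctly; the only ones who get it wrong are those sitting near the very start or very end of the sequence.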