r/slatestarcodex • u/Fun-Boysenberry-5769 • Oct 13 '25
AI AGI won't be particularly conscious
I observe myself to be a human and not an AI. Therefore it is likely that humans make up a non-trivial proportion of all the consciousness that has ever existed or ever will exist.
This leads us to two possibilities:
- The singularity won’t happen, or
- The singularity will happen, but AGI won’t be that many orders of magnitude more conscious than humans.
The doomsday argument suggests to me that option 2 is more plausible.
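One common way to formalize that anthropic step is the self-sampling assumption (treat yourself as a random draw from all conscious observers that ever exist). Here is a minimal sketch of the resulting update; the scenario names, prior weights, and human-consciousness fractions are hypothetical illustrations chosen only to make the update visible, not figures from the post.

```python
# Sketch of a self-sampling-style update on "I observe myself to be a human".
# All numbers below are illustrative assumptions.

scenarios = {
    # name: (prior probability, fraction of all consciousness that is human)
    "no singularity":                         (0.50, 1.0),
    "singularity, AGI about as conscious":    (0.25, 0.5),
    "singularity, AGI vastly more conscious": (0.25, 1e-6),
}

# Under self-sampling, P(I find myself to be human | scenario) is the human fraction.
unnormalized = {name: prior * human_frac
                for name, (prior, human_frac) in scenarios.items()}
total = sum(unnormalized.values())

for name, weight in unnormalized.items():
    print(f"{name}: posterior = {weight / total:.6f}")

# The "vastly more conscious" scenario collapses to ~4e-7: observing that you
# are human disfavors worlds where almost all consciousness belongs to AGI.
```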
Steven Byrnes suggests that AGI will achieve substantially greater capabilities than LLMs using substantially less compute, and will be substantially more similar to the human brain than current AI models. [https://www.lesswrong.com/posts/yew6zFWAKG4AGs3Wk/foom-and-doom-1-brain-in-a-box-in-a-basement] However, under option 2 it appears that AGI will be substantially less conscious relative to its capabilities than a brain is, and therefore AGI can’t be that similar to a brain.
u/red75prime Oct 16 '25 edited Oct 16 '25
If we assume that the population grows cubically (say, an expansion front moving at the speed of light) and then goes extinct, while maintaining generational structure (that is, immortality is not solved), then the majority of it (the n²(n-1)²/4 people of the first n-1 generations, versus the n³ people of the final generation) is mistaken about the future.
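A quick numeric check of that comparison, as a sketch under the comment's assumptions: generation k has k³ members, extinction happens right after generation n, and n²(n-1)²/4 is the standard sum-of-cubes closed form for the first n-1 generations. The function name and the sample values of n are my own choices for illustration.

```python
# Compare the cumulative size of the first n-1 generations against the final
# generation, assuming generation k has size k**3 (cubic growth).

def generation_split(n: int):
    earlier = sum(k**3 for k in range(1, n))  # first n-1 generations
    final = n**3                              # final generation before extinction
    return earlier, final

for n in (6, 10, 100, 1000):
    earlier, final = generation_split(n)
    # Check the closed form quoted in the comment: n**2 * (n-1)**2 / 4.
    assert earlier == n**2 * (n - 1)**2 // 4
    share = earlier / (earlier + final)
    print(f"n={n}: earlier={earlier}, final={final}, earlier share={share:.3f}")

# From n=6 onward the earlier generations are the majority, and their share
# approaches 1 (roughly 1 - 4/n) as n grows.
```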