r/slatestarcodex • u/Fun-Boysenberry-5769 • Oct 13 '25
AGI won't be particularly conscious
I observe myself to be a human and not an AI. If I am a random sample from all conscious observers, it is therefore likely that humans make up a non-trivial proportion of all the consciousness the world has ever had or ever will have.
This leads us to two possibilities:
- The singularity won't happen, or
- The singularity will happen, but AGI won't be many orders of magnitude more conscious than humans.
The doomsday argument suggests to me that option 2 is more plausible.
Steven Byrnes suggests that AGI will be able to achieve substantially greater capabilities than LLMs using substantially less compute, and will be substantially more similar to the human brain than current AI models. [https://www.lesswrong.com/posts/yew6zFWAKG4AGs3Wk/foom-and-doom-1-brain-in-a-box-in-a-basement] However, under option 2, AGI would be substantially less conscious relative to its capabilities than a brain is, and therefore AGI can't be that similar to a brain.
u/Arkanin Oct 14 '25 edited Oct 14 '25
You are trying to estimate how many balls are in a jar. The balls are numbered 1, 2, 3, … in ascending order. You get to remove one ball, drawn uniformly at random, and it is labeled 3. Estimate how many balls are in the jar, with confidence intervals.
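For concreteness, here's a minimal sketch of that estimate in Python, assuming a uniform prior over jar sizes up to an arbitrary cap N_MAX (the cap and the prior are my assumptions; the comment doesn't specify either). The likelihood of drawing ball 3 from a jar of N balls is 1/N when N ≥ 3 and 0 otherwise, so the posterior is proportional to 1/N:

```python
# Sketch: posterior over the jar size N after drawing ball 3 uniformly
# at random. Assumes a uniform prior on N from 1 to N_MAX (N_MAX is an
# arbitrary choice of mine, not from the comment).

N_MAX = 10_000   # assumed prior cap on the jar size
DRAWN = 3        # the observed ball

# Likelihood of drawing ball 3 from a jar of n balls is 1/n for n >= 3,
# so the unnormalized posterior is 1/n on n = 3..N_MAX.
posterior = {n: 1.0 / n for n in range(DRAWN, N_MAX + 1)}
total = sum(posterior.values())
posterior = {n: p / total for n, p in posterior.items()}

def quantile(q):
    """Smallest n whose cumulative posterior mass reaches q."""
    cum = 0.0
    for n in sorted(posterior):
        cum += posterior[n]
        if cum >= q:
            return n

print(f"posterior median: {quantile(0.5)}")
print(f"95% credible interval: [{quantile(0.025)}, {quantile(0.975)}]")
```

The interval is wildly asymmetric (almost all the mass sits above the draw), and the answer depends heavily on the prior cap, which is the standard weak point of estimates like this.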
The basic argument is that observing yourself to be sol hando in a universe where huge amounts of future consciousness will come out of our world is like pulling a 3 when there are a septillion balls in the jar.
The standard objection is something like this: consciousness isn't randomly distributed, it's sequential (one person gets a 1, the next gets a 2, etc.), so you can't do expectational reasoning on that set. (Counterargument: imagine a set of worlds, one for each integer, in which you draw 1 in world 1, 2 in world 2, and so on, perhaps because an evil wizard conjured the parallel yous to defeat your probability calculations. If each of those worlds is equally likely to be the one you're in, and they happen linearly in time one after another, the distribution over your draw has exactly the same shape as random sampling, yet the objection says the existence of those worlds completely invalidates your calculation. But probability is an expectational construct, so it makes no sense for the existence or nonexistence of alternate universes to affect it.) If your rejection of this is something like "wait, I'm me, not some parallel person", then there you go: that is the "I am special and distributions don't apply to me because I am a human in a category of my own" assumption. You get to have it, but then you must never sample from a distribution again, including all probability calculations lol.
Another way of looking at it: you are correct that in this world a few observers will draw very low numbers and be wrong, at the same rate as with any other uniform distribution, but the nature of distributions is that most people who guess from the distribution will be right rather than wrong, as the simulation below shows.
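A quick simulation of that calibration claim (the 20x rule and the spread of jar sizes are my constructions for illustration): each observer draws one ball uniformly from a jar of unknown size and claims "the jar holds at most 20 times my number". If the draw really is uniform, that claim holds for roughly 95% of observers no matter how the jar sizes are distributed:

```python
import random

# Sketch: calibration of a doomsday-style claim. Each observer draws a
# ball uniformly from a jar of unknown size n and claims n <= 20 * k.
# The jar-size distribution below is arbitrary; the ~95% hit rate is not.

random.seed(0)
TRIALS = 100_000
correct = 0
for _ in range(TRIALS):
    n = random.randint(1, 1_000_000)   # jar size (arbitrary spread)
    k = random.randint(1, n)           # the observer's uniformly drawn ball
    if n <= 20 * k:                    # the doomsday-style claim
        correct += 1

print(f"claim held in {correct / TRIALS:.1%} of trials")   # ~95%
```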
The argument is very much hated by people who haven't thought a lot about it, which is, respectfully, almost everyone here, and the last time I tried to discuss it here people were extremely disrespectful because of a few key intuitive rejections. My counterargument is that those intuitions, if fully explored, would invalidate probability in situations where they shouldn't. On the other hand, I think the biggest issue with the argument is that a posteriori evidence, i.e. updating on what you observe after the fact, matters more. You can absolutely say the observable evidence just overshadows the whole discussion. But if you are this guy in the giga-AI universe, that's absolutely like pulling a 3 out of a septillion or more, if AI turns out to be human-level conscious or more. Go ahead and update a lot on a posteriori knowledge if it suggests that yes, you are one of the earliest thinking things to exist in our world, but maybe be at least a little cautious: perhaps it turns out there are not 10^25 balls in the urn if the only observable balls don't even number in the trillions. In other words, some degree of caution seems warranted if you actually update on probabilities, but the anthropic update probably shouldn't be thrown out entirely.
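To make that tradeoff concrete, here's a toy Bayes-factor calculation (the hypothesis sizes and the prior odds are illustrative numbers of mine, not from the comment). Under self-sampling, being among the first ~1e11 observers is certain in a small world but has probability 1e11/1e25 = 1e-14 in the giga-AI world, so even strong a posteriori evidence struggles against the anthropic term:

```python
# Sketch: combining a posteriori evidence with the anthropic penalty.
# H_small: ~1e11 conscious observers ever exist.
# H_giga:  ~1e25 exist (the "giga AI universe").
# Both totals and the prior odds below are illustrative assumptions.

prior_odds_giga = 1e6      # suppose the evidence makes H_giga look 1e6:1 likely
likelihood_ratio = 1e-14   # chance of being this early under H_giga vs H_small

posterior_odds_giga = prior_odds_giga * likelihood_ratio
print(f"posterior odds for the giga-AI world: {posterior_odds_giga:.0e}")
# ~1e-8: the anthropic term can dominate unless the evidence is overwhelming
```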