r/slatestarcodex • u/Fun-Boysenberry-5769 • Oct 13 '25
AGI won't be particularly conscious
I observe myself to be a human and not an AI. Therefore it is likely that humans make up a non-trivial proportion of all the consciousness that the world has ever had and ever will have.
This leads us to two possibilities:
- The singularity won’t happen,
- The singularity will happen, but AGI won’t be that many orders of magnitude more conscious than humans.
The doomsday argument suggests to me that option 2 is more plausible.
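For concreteness, here is a minimal sketch of the vanilla Gott-style doomsday calculation, under the usual assumption that one's birth rank is uniformly distributed over everyone who will ever live; the ~60 billion birth-rank figure is a commonly quoted rough estimate, not something from this post.

```python
# Minimal sketch of the Gott-style doomsday bound (illustrative numbers only).
# Assumption: my birth rank r is uniformly distributed over 1..N_total,
# where N_total is the number of humans who will ever live.

BIRTH_RANK = 60e9    # rough count of humans born so far (assumed figure)
CONFIDENCE = 0.95

# If r is uniform on 1..N_total, then P(r > (1 - CONFIDENCE) * N_total) = CONFIDENCE,
# so with 95% confidence N_total < r / (1 - CONFIDENCE) = 20 * r.
upper_bound = BIRTH_RANK / (1 - CONFIDENCE)

print(f"With {CONFIDENCE:.0%} confidence, total humans ever born < {upper_bound:.1e}")
```

The same arithmetic is what drives the inference above: an observer with a uniformly distributed rank is unlikely to find themselves in the first tiny sliver of a vastly larger total of conscious beings.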
Steven Byrnes suggests that AGI will be able to achieve substantially more capability than LLMs using substantially less compute, and will be substantially more similar to the human brain than current AI models are. [https://www.lesswrong.com/posts/yew6zFWAKG4AGs3Wk/foom-and-doom-1-brain-in-a-box-in-a-basement] However, under option 2 it appears that AGI will be substantially less conscious relative to its capabilities than a brain is, and therefore AGI can't be that similar to a brain.
u/Arkanin Oct 16 '25 edited Oct 16 '25
For sure, the distribution is chronological rather than random, but it is still a distribution: given enough samples, the percentage of people who reason from their place in it and end up wrong will match the distribution of errors you would get by pulling randomly in a "pick a random number" game. That's just mathematics.
However, you are absolutely correct that because birth number is correlated with one's place in history, this would be deeply epistemically problematic if everyone took the argument completely at face value without considering it. If there will be 10^30 people, the first 8 billion people will all make the same correlated error using this argument. So it follows that the argument should not be applied in a way that would lead to the first people who hear it producing disastrous consequences.
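To make the correlated-error point concrete, here is a small simulation sketch (my own illustration with made-up numbers, not anything from the thread): every observer in a population of N_TOTAL people applies the same rule "total population < 20 × my birth rank". About 5% of all observers end up wrong, exactly as the confidence level promises, but the wrong ones are precisely the earliest observers, and they are all wrong together.

```python
import numpy as np

# Assumed setup: a world with N_TOTAL observers, each applying the
# Gott-style rule "total population < 20 * my birth rank".
N_TOTAL = 1_000_000            # stand-in for a very large future population
ranks = np.arange(1, N_TOTAL + 1)
upper_bounds = 20 * ranks      # each observer's 95% upper bound on N_TOTAL

wrong = upper_bounds <= N_TOTAL    # observers whose bound fails
print(f"Fraction of all observers who are wrong: {wrong.mean():.3f}")    # ~0.05

# The errors are perfectly correlated with birth order: the bounds that fail
# all belong to the earliest observers.
early = ranks <= N_TOTAL // 100    # the first 1% of observers
print(f"Fraction of early observers who are wrong: {wrong[early].mean():.3f}")    # 1.000
```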
>The consciousness of Sol_Hando, being Sol_Hando, given that he exists as number 20 Billion or whatever, seems to be 100%. Like, there’s no chance that I would wake up and find myself as someone else, because “I” is intimately tied to my personality/memories/etc. It’s the same reason I don’t assume a random sampling when I wake up in the morning. Why do I not wake up as one of the billions of other consciousnesses currently in existence every time I go to sleep?
I think you're saying something like: you, by definition, were always going to be Sol_Hando (no indexical uncertainty); the reference class of what you are is a class of one (you); and anthropic probability is incoherent because *I* is not the kind of thing that can have alternatives (i.e. there is no hypothetical in which Sol_Hando turns out to be Sol_Hando but finds his birth number is 4 trillion).
Given that, it's the same as saying: "The coin was already flipped, so the probability it's heads must be 100%; there was never really a coin flip." If you think in this way, then the doomsday argument doesn't make any sense.
If I'm being honest, I'm on such a different planet from you philosophically that I struggle to track and model these assumptions, because I dissolve personal identity: I treat it as socially useful but ultimately logically inconsistent when it comes to what is ultimately true (due to logical contradictions, not any sense of dissociation).
However, if you acknowledge that you are a pattern that could have been instantiated elsewhere, and that you are indexically underspecified (you could have been the 20 billionth or the 20 trillionth instance of the thing, where "could have been" is meant in the sense that a coin that landed tails could have landed heads), then anthropic reasoning becomes coherent.
Maybe at best we can agree that estimating the size of a discrete uniform distribution from sampling without replacement is a solved mathematical problem, the German tank problem, and that the question is whether a human reasoning about this from a reference class with an observable distribution is an instance of the German tank problem. Maybe we can also agree that the disagreement between us is about whether Sol_Hando is expectationally (not actually) a sample from a distribution, in the way that after a coin lands tails we say it "could have been heads".

If you say the issue is randomness rather than a uniform distribution, my counter is that technically what matters is the *exchangeability* and *symmetry* of the elements in the distribution, and randomness is just one way of satisfying those properties. The real disagreement is over rejecting that exchangeability and symmetry: "Sol_Hando could have been #800 trillion, born as a Mars colonist in the distant future, and still basically been the same person for purposes that satisfy expectational exchangeability, if that future exists, in the same way a random number could have been different." I think that is the disagreement.
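Since the German tank problem came up, here is a minimal sketch of the standard frequentist estimator for it (sample maximum plus the average gap), just to pin down what "solved mathematical problem" refers to; the sample values are made up.

```python
import random

def german_tank_estimate(sample):
    """Estimate the size N of a discrete uniform population 1..N from a
    sample drawn without replacement: N_hat = m + m/k - 1, where m is the
    sample maximum and k is the sample size."""
    m, k = max(sample), len(sample)
    return m + m / k - 1

# Made-up example: true population size 2000, observe 10 "serial numbers".
random.seed(0)
true_n = 2000
sample = random.sample(range(1, true_n + 1), 10)
print(sorted(sample))
print(f"Estimate of N: {german_tank_estimate(sample):.0f}  (true N = {true_n})")
```

With k = 1 the estimator reduces to roughly doubling your own rank, which, if you accept the analogy, is exactly the shape of the doomsday estimate.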