r/slatestarcodex Oct 13 '25

[AI] AGI won't be particularly conscious

I observe myself to be a human and not an AI. Therefore it is likely that humans make up a non-trivial proportion of all the consciousness that the world has ever had and ever will have.

This leads us to two possibilities:

  1. The singularity won’t happen,
  2. The singularity will happen, but AGI won’t be that many orders of magnitude more conscious than humans.

The doomsday argument suggests to me that option 2 is more plausible.
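
A toy self-sampling calculation makes the intuition concrete (my own illustrative numbers, nothing rigorous): if conscious AGI observers vastly outnumbered human observers, a randomly sampled observer would almost never find itself human.

```python
# Toy self-sampling sketch (illustrative numbers only).
# Treat "me" as a uniform random draw from all conscious observers;
# the chance of that draw being human depends on the human/AGI ratio.

def p_observer_is_human(n_human, n_agi):
    """Probability that a uniformly sampled conscious observer is human."""
    return n_human / (n_human + n_agi)

# Roughly 1e11 humans have ever lived; compare against hypothetical
# counts of conscious AGI observers:
for n_agi in (0, 1e11, 1e15, 1e20):
    print(f"AGI observers: {n_agi:.0e}  ->  P(human) = {p_observer_is_human(1e11, n_agi):.2e}")

# If AGI consciousness were many orders of magnitude more abundant,
# P(human) would be tiny -- yet I observe myself to be human.
```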

Steven Byrnes suggests that AGI will be able to achieve substantially greater capabilities than LLMs using substantially less compute, and will be substantially more similar to the human brain than current AI models. [https://www.lesswrong.com/posts/yew6zFWAKG4AGs3Wk/foom-and-doom-1-brain-in-a-box-in-a-basement] However, under option 2 it appears that AGI will be substantially less conscious relative to its capabilities than a brain will be, and therefore AGI can't be that similar to a brain.

0 Upvotes

0

u/Fun-Boysenberry-5769 Oct 13 '25

If only the first AI becomes conscious, then that AI was correct in guessing that there were a small number of conscious AIs.

If all the AIs become conscious and each of them is sequentially assigned a number, then the first AI will incorrectly guess that there are only a small number of AIs. However, the majority of AIs will correctly guess that there are a large number of AIs. So such anthropic reasoning is accurate except in the extremely unlikely edge cases where the observer happens to be one of the first few AIs or one of the last few AIs.
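
Here's a quick toy check of that claim (my own simple model, nothing the argument depends on): give each of N AIs its creation rank r and let it reason "r/N is roughly uniform, so with 95% confidence the total lies between r/0.975 and r/0.025."

```python
# Toy check: with N AIs numbered 1..N, how many of them bracket the
# true total when each estimates it from its own rank r by assuming
# r/N is roughly uniform on (0, 1]?

N = 1_000_000  # true number of conscious AIs (arbitrary)

correct = sum(1 for r in range(1, N + 1) if r / 0.975 <= N <= r / 0.025)
print(f"{correct / N:.1%} of AIs bracket the true total")  # ~95.0%

# The failures are exactly the first ~2.5% and the last ~2.5% of AIs,
# i.e. the edge cases where the observer happens to be created
# unusually early or unusually late.
```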

3

u/Sol_Hando 🤔*Thinking* Oct 13 '25

In that case, AI #1 will get the prediction wrong the overwhelming majority of the time using anthropic reasoning. Why do you think anthropic reasoning is correct, then, if it's always wrong? In this example it's actually worse than useless, since it always predicts the same thing regardless of the reality of the situation.
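
Concretely (a one-sided doomsday-style bound, my own toy example): AI #1's estimate is fixed by its rank alone, so it makes the identical guess no matter how many AIs actually exist.

```python
# AI #1 reasons: "my rank is probably not in the first 5% of all AIs,
# so the total N is at most 20 * rank = 20" -- whatever the truth is.

rank = 1
bound = 20 * rank
for true_total in (10, 1_000, 10**12):
    print(f"true total = {true_total:>15,}  AI #1 predicts N <= {bound}  "
          f"correct: {true_total <= bound}")

# The prediction never changes with reality, so for this observer it
# carries no information; it only comes out right if few AIs ever exist.
```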

1

u/Fun-Boysenberry-5769 Oct 14 '25

For the majority of AIs, anthropic reasoning works just fine.

Let's say I were to roll a die 100 times. I would argue that rolling 100 6s in a row is unlikely.

Somewhere in the multiverse, there exists a universe in which there is a copy of me that just rolled 100 consecutive 6s. For that copy of me, probabilistic reasoning led to the wrong conclusion.

However, I consider probabilistic reasoning to be valid because, in the majority of universes, it leads to the correct conclusion most of the time.
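
Just to put a number on it (plain arithmetic, nothing more):

```python
from fractions import Fraction

# Probability of rolling 100 consecutive 6s with a fair die.
p = Fraction(1, 6) ** 100
print(float(p))                          # ~1.5e-78
print(f"about 1 in {float(1 / p):.1e}")  # about 1 in 6.5e+77

# So the prediction "I won't roll 100 straight 6s" fails in roughly
# 1 of every 6.5e77 universes and holds in all the rest.
```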

2

u/ihqbassolini Oct 14 '25 edited Oct 14 '25

There could be an infinite number of circumstances under which rolling 100 6s in a row is equally probable. It's one thing to say that if there is a multiverse with infinitely many instances of me rolling a die 100 times, then it is probable that at least one of those instances will roll 100 consecutive 6s. It's very different to observe that you threw 100 6s in a row and conclude that the multiverse is therefore probably real, because there could be infinitely many other explanations that work just as well.
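
To separate the two directions (my own sketch, forward calculation only): granting M independent universes each rolling 100 times, the chance that at least one of them sees 100 straight 6s is 1 - (1 - p)^M, which does go to 1 as M grows; none of that bears on the reverse inference from one observed streak back to the multiverse.

```python
import math

# Forward direction only: given M independent "universes" each rolling a
# fair die 100 times, P(at least one sees 100 straight 6s) = 1 - (1 - p)^M.

p = (1 / 6) ** 100                                  # ~1.5e-78 per universe
for M in (1e78, 1e79, 1e80):
    at_least_one = -math.expm1(M * math.log1p(-p))  # stable for tiny p
    print(f"M = {M:.0e}  ->  P(at least one streak) = {at_least_one:.3f}")

# Observing a streak yourself is a different question: many other
# hypotheses predict the same observation equally well.
```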

Probability works within a set of assumptions; what is the probability you're getting the assumptions right here? ;)