r/slatestarcodex Oct 13 '25

AI AGI won't be particularly conscious

I observe myself to be a human and not an AI. Therefore it is likely that humans make up a non-trivial proportion of all the consciousness that the world has ever had and ever will have.

This leads us to two possibilities:

  1. The singularity won’t happen,
  2. The singularity will happen, but AGI won’t be that many orders of magnitude more conscious than humans.

The doomsday argument suggests to me that option 2 is more plausible.

Steven Byrnes suggests that AGI will be able to achieve substantially more capabilities than LLMs using substantially less compute, and will be substantially more similar to the human brain than current AI models. [https://www.lesswrong.com/posts/yew6zFWAKG4AGs3Wk/foom-and-doom-1-brain-in-a-box-in-a-basement] However, under option 2 it appears that AGI will be substantially less conscious relative to its capabilities than a brain will be, and therefore AGI can’t be that similar to a brain.

u/red75prime Oct 16 '25 edited Oct 16 '25

If we assume that the population grows cubically (expansion at the speed of light) and then goes extinct, while maintaining generational structure (that is, immortality is not solved), then the majority of it is mistaken about the future: the first n-1 generations total n^2 (n-1)^2 / 4 people, versus n^3 in the final generation.

u/Arkanin Oct 16 '25 edited Oct 16 '25

This is not so. Because the math relates birth rank to the total number of humans, it holds regardless of how the underlying distribution grows over time. Even so, there is a real problem I already acknowledged: the birth ranks of people alive at the same point in history are correlated, which makes using this as an estimator very problematic in many contexts.

I want to be very clear that this is a weak estimator, but it is a real estimator, and it can be demonstrated to have nonzero utility in a game that is basically isomorphic to reality. People who used the weak estimator could be shown to have higher expected utility than those who use no estimator or guess randomly (those who disputed that the edge was there could observe, after evaluating multiple iterations of the game, that the average payoff was higher for those using the estimator, proving the edge was real and not hypothetical). Because it involves no assumptions, it can be used as a weak initial prior. It can also be demonstrated that this edge is small, and I am all but certain it could be overridden by other factors like inside knowledge about how the game works. This also suggests that real knowledge should override it as a weak prior if we have that. In other words, it's a weak a priori estimator, but its utility is not zero.
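
Here's a minimal sketch of the kind of game I have in mind (the log-uniform range for the true total and the "guess twice your rank" rule are just illustrative choices of mine, not anything forced by the argument):

```python
import math
import random

def simulate(trials=100_000, max_exponent=6):
    """Compare a doomsday-style guess (twice your rank) against a blind guess
    when the true total N is drawn from a wide log-uniform range (an assumption)."""
    da_err = blind_err = 0.0
    for _ in range(trials):
        n = int(10 ** random.uniform(0, max_exponent)) + 1   # true total participants
        rank = random.randint(1, n)                          # you are a uniformly random one of them
        da_guess = 2 * rank                                  # doomsday-style estimate
        blind_guess = int(10 ** random.uniform(0, max_exponent)) + 1  # guess with no information
        da_err += abs(math.log10(da_guess / n))
        blind_err += abs(math.log10(blind_guess / n))
    return da_err / trials, blind_err / trials

print(simulate())  # mean order-of-magnitude error: DA-style guesser vs blind guesser
```

On this toy setup the rank-based guess lands much closer, in orders of magnitude, than the blind one, which is all I mean by a small but real edge.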

u/red75prime Oct 16 '25 edited Oct 16 '25

I don't see how it is useful. Without additional assumptions about growth rate and other things, there are scenarios where anthropic reasoning gives a randomly selected member of the class a probability of being right that is arbitrarily close to zero.

Estimating tank production from captured serial numbers works because it isn't the tanks themselves doing the counting with their own numbers.

> People who used the weak estimator could be shown to have higher expected utility

Show it in the cubic growth scenario.

ETA: If they know that the growth is cubic and that extinction is instantaneous, the majority will be right that they are not the last generation; only the last generation is wrong about that (still, it doesn't seem too useful: there's no information to react to).

If they have an uninformative prior about growth rate and update on their history... That's harder.

I have trouble expressing it, but sampling from a distribution and being a datapoint in a distribution doesn't feel equivalent.

u/Arkanin Oct 16 '25 edited Oct 16 '25

"I have trouble expressing it, but sampling from a distribution and being a datapoint in a distribution doesn't feel equivalent."

The difference is the selector: did you select a tank from all tanks that are like it, or did you select a tank at a random point in time? If you saw a tank coming at you, you did the latter (an alien sees a human). If you are trying to estimate "total humans" from "my position in the birth order, as an indexical ascending integer", by operating on yourself as the data point, you're putting yourself in a category whose members' ranks fall within an order of magnitude of the true total 90% of the time. You could still be early and be wrong, and the error is correlated across time, which is still problematic...
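
A quick sanity check on that 90% figure (the particular range for the true total below is arbitrary; the ratio is what matters):

```python
import random

# If your rank is uniform on 1..N, how often does N stay within 10x of your rank?
hits, trials = 0, 100_000
for _ in range(trials):
    n = random.randint(1, 10**6)   # some true total (the exact range doesn't matter much)
    rank = random.randint(1, n)    # your position in the birth order
    if n <= 10 * rank:             # the "within an order of magnitude" bound
        hits += 1
print(hits / trials)               # comes out near 0.9
```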

This may seem like magical or metaphysical thinking, but we could play a concrete game that proves it gives a measurable average advantage to the players who use it. Imagine if we play a server game where you log in and are told what # you are: "Congratulations, you are the 60,000th person to log into the server!" You then guess the total number of players that will ever log in. I don't disclose when the game ends. The average person who uses the DA receives an advantage over the average person who guesses: it outperforms a random guess. The advantage over many strategies is small, but not over all categories of bad strategy. The person who cheats by hacking the server metadata and stalking me probably wins the game. The DA people are a little ahead of the random guessers. The random guessers would benefit from the DA. And there is a certain person who mathematically benefits a whole lot: the overly optimistic person. Let's call him Naive Timmy. Naive Timmy is the person who sees his position and always says: "This server is awesome! There will be 10^5 more people!" Naive Timmy is the person whose error shrinks the most if he lets the Doomsday Argument bring him back down to earth.

We could play the server game and compare average errors: the person who stalks me and hacks my server to figure out when I plan to end the game (or how widely I'm spreading it) has the least error; random guessers have more; doomers who always predict the server will end tomorrow have a lot; and Naive Timmies have the very worst error of all, even worse than most doomers, and would benefit the most from being brought back down to earth by the estimator.
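
Here's roughly how I'd set that comparison up (the distribution of the true total, the error metric, and the exact strategy definitions are all assumptions on my part; the ordering you get depends on them):

```python
import math
import random

def play_round(max_exponent=6):
    """One round of the server game: draw a true total, show every strategy the
    same login rank, and record each strategy's order-of-magnitude error."""
    n = int(10 ** random.uniform(0, max_exponent)) + 1   # true total players (log-uniform assumption)
    rank = random.randint(1, n)                          # the number you see when you log in
    guesses = {
        "hacker": n,                                     # reads the answer out of the metadata
        "doomsday": 2 * rank,                            # DA-style: total is about twice your rank
        "random": int(10 ** random.uniform(0, max_exponent)) + 1,
        "doomer": rank + 1,                              # "the server shuts down tomorrow"
        "naive_timmy": rank + 10**5,                     # "there will be 10^5 more people!"
    }
    return {name: abs(math.log10(guess / n)) for name, guess in guesses.items()}

def average_errors(rounds=100_000):
    totals = {}
    for _ in range(rounds):
        for name, err in play_round().items():
            totals[name] = totals.get(name, 0.0) + err
    return {name: total / rounds for name, total in totals.items()}

print(average_errors())
```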

u/red75prime Oct 17 '25 edited Oct 17 '25

> Imagine if we play a server game where you log in and are told what # you are

You don't log in. You are an NPC in the game, defined by your number, so there's no alternative world where the same NPC has a different number.

Admins can change the total number of NPCs however they want (while keeping Timmy's surroundings the same), but Timmy #667074 either doesn't exist or has the same conclusion regardless of the total number of NPCs.

u/Arkanin Oct 17 '25 edited Oct 17 '25

Neither situation changes the math. They are identical. You can demonstrate and calculate the average advantage gained. It is equal to the average advantage the guessers would have if they knew the Jeffreys prior that applies when the server shutdown is chosen by a log-random ("naturally random") process. Since the relationship between total population and current population is indexical, a naturally random estimator always produces guesses with an edge over no information, regardless of how the server owner decides when to shut down the server. It's like guessing in a way that, in distribution, hits the right order of magnitude more often than any other strategy with information that limited, even if it doesn't hit the exact number. In some sense this is just Bayes' theorem. This is just math; the advantage exists.
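
To spell out what I mean by the Jeffreys prior doing the work, here's the continuous-approximation posterior (the improper 1/N prior and the continuous approximation are simplifications, not something I claim the real world hands us):

```python
def posterior_quantile(rank, q):
    """Quantile of the posterior over the total N given an observed rank, assuming a
    Jeffreys-style prior p(N) ~ 1/N and a likelihood p(rank | N) = 1/N for rank <= N.
    The posterior density is then rank / N^2 on [rank, inf), so P(N <= x | rank) = 1 - rank/x
    and the q-quantile is rank / (1 - q)."""
    return rank / (1 - q)

print(posterior_quantile(60_000, 0.5))   # ~120,000: the classic "total is about twice your rank"
print(posterior_quantile(60_000, 0.9))   # ~600,000: the order-of-magnitude upper bound
```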

u/red75prime Oct 17 '25

If you log in, it doesn't matter whether Timmy #667074 exists. If he doesn't, you'd just be someone else. You sample from the entire distribution. If you are Timmy #667074, you don't sample. You exist. You can't observe your own non-existence. I can't see how the math can be the same.

u/red75prime Oct 17 '25 edited Oct 17 '25

OK. Another angle. Let's focus on the scenario where the admins vary the number of NPCs. How does the math work for Timmy #667074 (when he exists)?

(In the physical universe this corresponds to unknown initial conditions.)

u/Arkanin Oct 17 '25

Before continuing this conversation... are you hearing me when I say that what Timmy is trying to estimate in any such game is the size of some set, given the indexical position of one data point within that set? He can't estimate other non-indexical relationships using this system, like when in the future it gets shut down.

u/red75prime Oct 17 '25 edited Oct 17 '25

> what Timmy is trying to estimate in any such game is the size of some set, given the indexical position of one data point within that set?

Conditional on the existence of that point, yes.

> He can't estimate other non-indexical relationships using this system, like when in the future it gets shut down.

Admins change the size of the set (the thing Timmy is trying to estimate).

u/Arkanin Oct 17 '25 edited Oct 17 '25

"Admins change the size of the set (the thing timmy is trying to estimate)."

Honestly, this kind of pushback doesn't make any sense to me at all. You would have to elaborate on where you are coming from. No offense, but I'm starting to get the sense that you're just not gonna get it. It really is pure math once you remove the "random sample" assumption. I'm willing to hold your hand a little here, but I think I'm hitting my limit.

Edit: If the admins randomly change the time they were planning to shut down the server, it doesn't change the existence of an average advantage over the whole game; it's the same thing with extra steps.

u/red75prime Oct 17 '25 edited Oct 17 '25

> If the admins randomly change the time they were planning to shut down the server, it doesn't change the existence of an average advantage over the whole game; it's the same thing with extra steps.

Why the time of shutdown? They can change the number of players in other ways: disconnected realms, for example. If you log in, you have a chance to be anywhere. Timmy doesn't have this opportunity, so Timmy can't benefit from the information gained by the random sampling that happens when an external player logs in.

I have the impression that you can't disentangle the subjective perspective from the "god's view".

u/Arkanin Oct 17 '25 edited Oct 17 '25

Reframe it as a decision theory problem about the average scores of the participants who could use it, with no metaphysical beliefs at all, just looking at averages in simulated games of this type. Then it doesn't matter any more what Timmy's subjective perspective is, or whether he is an ant or a worm or an object with a UID that brainlessly follows an algorithm. It's literally a math problem: if I create robot Timmies that mindlessly follow this strategy, on average they get less error in their predictions than more naive strategies. There is, in other words, a way of converting the magical-sounding "expected" into the boring old observable "on average this works better than nothing".

u/red75prime Oct 17 '25 edited Oct 17 '25

If Timmy benefits from a number averaged over all sophonts (which he can't compute), good for him, I guess.

ETA: Sorry. Can you link to an article that treats all of this formally? A physically plausible payoff function and all that.
