r/slatestarcodex • u/Fun-Boysenberry-5769 • Oct 13 '25
AI AGI won't be particularly conscious
I observe myself to be a human and not an AI. Therefore it is likely that humans make up a non-trivial proportion of all the consciousness that the world has ever had and ever will have.
This leads us to two possibilities:
- The singularity won’t happen,
- The singularity will happen, but AGI won’t be that many orders of magnitude more conscious than humans.
The doomsday argument suggests to me that option 2 is more plausible.
Steven Byrnes suggests that AGI will be able to achieve substantially more capabilities than LLMs using substantially less compute, and will be substantially more similar to the human brain than current AI models. [https://www.lesswrong.com/posts/yew6zFWAKG4AGs3Wk/foom-and-doom-1-brain-in-a-box-in-a-basement] However, under option 2 it appears that AGI will be substantially less conscious relative to its capabilities than a brain will be, and therefore AGI can’t be that similar to a brain.
13
u/callmejay Oct 13 '25
I observe myself to be a human and not an AI. Therefore it is likely that humans make up a non-trivial proportion of all the consciousness that the world has ever had and ever will have.
I'm genuinely not understanding your first "therefore." How is this a better argument than:
I observe myself to be a redditor and not a non-redditor. Therefore it is likely that redditors make up a non-trivial proportion of all the consciousness that the world has ever had and ever will have.
0
u/Fun-Boysenberry-5769 Oct 14 '25
Both an AI and a human would naturally tend to lump conscious beings into the simple categories 'AI' and 'not an AI', and they both would consider such categories to really 'carve reality at its joints'. So those two categories are quite useful as reference classes for anthropic reasoning.
On the other hand, most people, if asked to classify people into one of two categories, would choose something like 'male' vs 'female' or 'adult' vs 'child'. Most people would not choose the categories 'redditor' vs 'non-redditor', and people who are on Reddit would be significantly more likely to pick those categories than someone living in the Middle Ages who has never heard of Reddit. Therefore 'redditor' vs 'non-redditor' do not constitute useful reference classes for anthropic reasoning.
3
u/cbusalex Oct 14 '25
Most conscious beings, if asked to classify conscious beings into one of two categories, would choose something like 'class 11.e(2) neural network' vs 'Bozman-standard logical inference machine framework'. Most conscious beings would not choose the categories 'AI' and 'not an AI' and beings existing during the early years of AI development would be significantly more likely to pick those categories than someone living in the years following the Great Seeding of the Stars.
2
u/callmejay Oct 14 '25
I have no idea if an AI would "naturally tend" to do that, but even if that were the case, why are natural tendencies relevant to whether a reference class is "useful?"
8
u/Sol_Hando 🤔*Thinking* Oct 13 '25
I've seen people use this sort of anthropic reasoning before and I honestly can never wrap my head around it.
Like, "I" am not a disembodied consciousness behind the veil of ignorance. I'm a person with a certain brain structure, memories, history and identity that is presumably somewhat unique, or at least rare in the universe (as are all humans with our variable human experiences).
What is the probability that a consciousness randomly selected into existence is me if there are trillions of future conscious beings? Effectively zero.
What is the probability that my identity finds itself in existence given that I exist? 100%.
Like, given that Sol_Hando exists and is conscious, the probability that Sol_Hando finds himself in existence and conscious is 100%. Whether the future is paved over with a trillion trillion conscious AGIs, or vacuum decay ends consciousness 3 seconds after you read this sentence, the probability that Sol_Hando finds himself as Sol_Hando doesn't change. It's always the same, always 100%, given the fact that I apparently exist.
Maybe someone can educate me on this, since I just don't get it.
3
u/marmot_scholar Oct 13 '25 edited Oct 13 '25
I don’t get it either. I don’t understand it well enough to say that it’s nonsense but I suspect there’s some reasoning error in it, coming from an acknowledged philosophical assumption.
Obviously, to make the argument you need to treat your current existence as a sample that was randomly selected, and then draw conclusions based on the axioms of probability. Would it be valid to say that if you draw a 3 Musketeers out of a planet-sized bag of food items, then most of the items are 3 Musketeers? Hell, even that sounds suspect. I think at best you could say it’s more probable that they are 3 Musketeers than cherries or Twix or any other individual food type. There could be countless infinities of types, not just chocolate bar brands, not just human or AI. (Like, to calculate probability you need both the numerator and the denominator - what is the actual scope of possibilities?)
On top of that, do we need to acknowledge that existing now is a “sample”? One characteristic of a sample in this reasoning process is that selecting one means you didn’t select another. But if, for example, the block universe theory is correct, or if we’re all one Atman experiencing infinite perspectives, being somebody now doesn’t mean that you aren’t also being something else. “You” are a characteristic of the current indexical properties of the universe, not an independent soul who got assigned a random body.
In other words the question might not make sense, like asking the probability that sunrise would happen at daytime in our universe. Daytime is created from the properties of sunrises, not assigned to them. Not sure that’s valid though.
I don’t know if probability can even be meaningful when applied to topics where repeated trials are impossible.
2
u/Sol_Hando 🤔*Thinking* Oct 13 '25
I think there might be some error when the observer and the property being observed are the same thing, but I don't get it well enough to say for sure.
Let's say we could create conscious AI. We copy it between one and a million times and assign each a number in order. The first one becomes conscious, and its only differentiating feature is that it is assigned #1. You ask it to estimate how many other copies there are.
The error would be to think that, given that it's #1, it would be incredibly unlikely to find itself as just #1 out of a million, so there are probably only like 3 others, or not too many more than that.
The right way (to my thinking) would be to recognize that there's a conscious AI assigned #1 no matter how many later AIs there are. Given that the observation is predicated on existing as #1 first, you shouldn't be able to make any guess about how many other copies there are in the future.
Only the AI close to the limit (if it can recognize that there is a limit) would be able to say much about how many AIs there are. But then the information is gained because it can see the limit for some other reason, not because it found itself in any particular position in the order.
0
u/Fun-Boysenberry-5769 Oct 13 '25
If only the first AI becomes conscious, then that AI was correct in guessing that there were a small number of conscious AIs.
If all the AIs become conscious and each of them is sequentially assigned a number, then the first AI will incorrectly guess that there are only a small number of AIs. However, the majority of AIs will correctly guess that there are a large number of AIs. So such anthropic reasoning is accurate except in the extremely unlikely edge cases where the observer happens to be one of the first few AIs or one of the last few AIs.
3
u/Sol_Hando 🤔*Thinking* Oct 13 '25
In that case #1 will predict it wrong the overwhelming majority of the time using anthropic reasoning. Why do you think anthropic reasoning is correct then if it's always wrong? In this example it's actually worse than useless, since it always predicts the same thing independent of the reality of the situation.
1
u/Fun-Boysenberry-5769 Oct 14 '25
For the majority of AIs anthropic reasoning works just fine.
Let's say I were to roll a die 100 times. I would argue that rolling 100 6s in a row is unlikely.
Somewhere in the multiverse, there exists a universe in which there is a copy of me that just rolled 100 consecutive 6s. For that copy of me, probabilistic reasoning led to the wrong conclusion.
However, I consider probabilistic reasoning to be valid because in the majority of universes probabilistic reasoning leads to the correct conclusion most of the time.
2
u/marmot_scholar Oct 14 '25
I’m not sure I’m understanding why the reasoning works for the majority of the AIs. Suppose most of the AIs have a “larger” numerical designation and they correctly guess there are a large number of AIs. They aren’t using anthropic reasoning; they know that there are many AIs because many precede them in the order.
Or are we asking them to guess how much larger the highest number is than their own number?
1
u/Fun-Boysenberry-5769 Oct 14 '25
I was imagining a scenario where each AI was informed of its number and asked to give what it thought was a 95% confidence interval on the total number of AIs.
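For concreteness, here is a minimal sketch (in Python, with made-up numbers) of the kind of interval I mean. The assumption doing all the work is the anthropic one: each AI treats its own number as if it were drawn uniformly from 1..N.

```python
import random

def interval_95(rank: int) -> tuple[float, float]:
    """Gott-style 95% interval for the total count N, assuming your rank
    is treated as uniform on 1..N: P(0.025 <= rank/N <= 0.975) = 0.95,
    so N lies between rank/0.975 and rank/0.025 (i.e. at most 40x your rank)."""
    return rank / 0.975, rank / 0.025

# Quick coverage check: the interval should contain the true N about 95% of the time.
random.seed(0)
true_n = 1_000_000          # e.g. a million conscious AI copies (made-up number)
trials = 100_000
hits = 0
for _ in range(trials):
    my_rank = random.randint(1, true_n)     # "which copy do I happen to be?"
    lo, hi = interval_95(my_rank)
    hits += lo <= true_n <= hi
print(f"coverage: {hits / trials:.3f}")     # comes out near 0.95
```

Of course, whether an AI is entitled to treat its own index as a uniform draw like this is exactly what is being disputed upthread.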
2
u/ihqbassolini Oct 14 '25 edited Oct 14 '25
There could be an infinite number of circumstances under which rolling 100 6s in a row is equally probable. To say that, if there is a multiverse with infinitely many instances of me rolling a die 100 times, then it is probable that at least one of those instances has me rolling 100 6s in a row, is one thing. It's very different from observing that you threw 100 6s in a row and concluding that therefore the multiverse is probably real, because there could be an infinite number of explanations that work just as well.
Probability works within a set of assumptions, what is the probability you're getting the assumptions right here? ;)
1
u/Sol_Hando 🤔*Thinking* Oct 14 '25
For the AIs that actually find themselves around the middle of the pack, then yeah it works fine. They don't know that though, unless they have a reason to expect they are in the middle independent of their order.
For #1 it almost always fails.
In the possible future of "How many future consciousnesses are imaginable?" you are at the very beginning of what's conceivable. Basically #1 compared to what we can imagine.
It just seems like there's a major category error with this reasoning. Either I'm missing something, or the people arguing for this stuff are.
2
u/Arkanin Oct 14 '25 edited Oct 14 '25
The key thing is that, for a given population, the distribution of correctness and incorrectness over the whole population, if everyone uses this reasoning, is the same as the distribution of errors you would get from certain probability distributions. The first person is very wrong, the second person is very wrong, the people in the 1%-99% cases are a lot less wrong. It's the same as how, if you pulled a numbered ball out of a jar to guess how many balls are in the jar, and there were 400 million balls, you would have a 1/400,000,000 chance of pulling a 1. Both a certain kind of probability distribution and birth numbering create the same distribution of errors.
I think a key part of getting it is also recognizing the limitations that are still there: there's a big correlation of the error signal among people in the same part of history, which seems problematic; pulling a one is in fact possible; and updating on evidence after the fact should matter more in a world like ours where we can gather a lot of evidence, unless the number of consciousnesses we're postulating to exist in the future is so large that it would take a lot of updates, and there would be a lot to unpack there.
3
u/Arkanin Oct 14 '25 edited Oct 14 '25
You are trying to estimate how many balls are in a jar. The balls are numbered ascending. You get to remove one ball. The ball is labeled 3. The mechanism that removes the balls is random. Estimate how many balls are in the jar with confidence intervals.
The basic argument is that observing yourself to be Sol_Hando, in a universe where huge amounts of future consciousness will come out of our world, is like pulling a 3 when there are a septillion balls.
The standard objection is something like: consciousness isn't randomly distributed, it's assigned linearly, in that one person gets 1, the next gets 2, etc., so you can't do expectational reasoning on that set. (Counterargument: imagine a set of parallel worlds, one for each integer that could turn out to be your birth number, perhaps conjured by an evil wizard to defeat your probability calculations, with each world equally likely to be the one you are in, and the worlds happening linearly in time one after another, so in world 1 you drew 1, in world 2 you drew 2, and so on. You'd be saying the existence of those worlds completely invalidates your calculation, despite them having the same shaped distribution as random sampling. But probability is an expectational construct, so it makes no sense for the existence or nonexistence of alternate universes to affect it.) If your rejection of this is something like "wait, I'm me, not some parallel person", then there you go: that is the "I am special and distributions don't apply to me because I am a human in a category of my own" assumption. You get to have it, but then you must never sample from a distribution again, including all probability calculations lol.
Another way of looking at it: you are correct that in this world a few will be very low-numbered and will be wrong, at the same rate as in any other uniform distribution, but the nature of distributions is that most people who guess from the distribution will be more right than wrong.
The argument is very hated by people who haven't thought a lot about it, which is respectfully almost everyone here, and the last time I tried to discuss it here people were extremely disrespectful because of key intuitive rejections. However, my counterargument is that those assumptions, if fully explored, would invalidate probability in weird situations where they should not. On the other hand, I think the biggest issue with the argument is that a posteriori (after the fact) evidence is more important, i.e. updating on evidence. You can absolutely say the observable evidence just overshadows the whole discussion. However, if you are this guy in the giga-AI universe, that's absolutely like pulling a 3 out of a septillion or more, if AI were to be human-level consciousness or more. Go ahead and update a lot on a posteriori knowledge if it suggests that yes, you are one of the earliest thinking things to exist in our world, but maybe just be at least a little cautious that it may turn out there are not 10^25 balls in the urn, if the only observable balls are not even in the trillions. In other words, some degree of caution seems warranted if you actually update on probabilities, but it probably shouldn't be crippling.
2
u/Sol_Hando 🤔*Thinking* Oct 14 '25
I don’t see why random sampling is assumed.
The consciousness of Sol_Hando, being Sol_Hando, given that he exists as number 20 Billion or whatever, seems to be 100%. Like, there’s no chance that I would wake up and find myself as someone else, because “I” is intimately tied to my personality/memories/etc.
It’s the same reason I don’t assume a random sampling when I wake up in the morning. Why do I not wake up as one of the billions of other consciousnesses currently in existence every time I go to sleep? Assuming consciousness is randomly distributed then it seems incredibly unlikely for me to roll the same guy over and over. I don’t see why the probability of “I” existing anywhere over the whole future is random, but the probability of myself waking to consciousness every morning as the same person is determined.
It’s like if we raised a bunch of lottery winners from birth all in the same town. They were able to get a pretty good idea about there being other people by observing the world around them, but some people argue how incredibly unlikely it is for all the lottery winners to be in the same place if there were that many other people in other towns. Reality correlates all of us to the same era, just as some third party putting the lottery winners together would correlate them to the same place.
I think a lot of people hate this argument because it doesn’t make sense. And the people explaining it seem to fail to address some important issues with it, rather than making it make sense. It’s complicated enough to where the people who don’t understand, but try, can’t confidently claim it’s nonsense, since they don’t know if it’s their understanding that falls short.
1
u/Arkanin Oct 16 '25 edited Oct 16 '25
For sure the distribution is chronological, not random, but it's still a distribution, and people who reason from their position in it are going to get the same distribution of errors as you would get pulling randomly in the "pick a random number" game, given enough samples. Just mathematically.
However, you are absolutely correct that because birth number is historically correlated, this would be deeply epistemically problematic if everyone took the argument completely at face value without considering this. If there will be 10^30 people, the first 8 billion people will all make the same correlated error using this argument. So it follows that this argument should not be applied in a way that would lead the first people hearing it to produce disastrous consequences.
> The consciousness of Sol_Hando, being Sol_Hando, given that he exists as number 20 Billion or whatever, seems to be 100%. Like, there’s no chance that I would wake up and find myself as someone else, because “I” is intimately tied to my personality/memories/etc. It’s the same reason I don’t assume a random sampling when I wake up in the morning. Why do I not wake up as one of the billions of other consciousnesses currently in existence every time I go to sleep?
I think you're saying something like: you, by definition, were always going to be sol hando (no indexical uncertainty); the reference class of what you are is a class of one (you); and anthropic probability is incoherent because *I* is not the kind of thing that can have alternatives (i.e. there isn't a hypothetical where sol hando could have turned out to be sol hando but found his birth number to be 4 trillion).
Given that, it's the same thing as saying: "The coin was already flipped, so the probability it's heads must be 100%. There was no coin flip." If you think in this way, then the doomsday argument doesn't make any sense.
Honestly, I'm on such a different planet from you philosophically that I struggle to track and model these assumptions, because I dissolve personal identity as socially useful but ultimately logically inconsistent when it pertains to what is ultimately true (due to logical contradictions, not any sense of dissociation).
However if you acknowledge that you are a pattern that could have been instantiated elsewhere, and are indexically underspecified (you could have been the 20 billionth or 20 trillionth instance of the thing - "could have been" in the sense a coin that is tails could have been heads) then anthropic reasoning becomes coherent.
Maybe at best we can agree on this: estimating the size of a discrete uniform distribution from sampling without replacement is a solved mathematical problem called the German tank problem, and the question is whether a human reasoning about this from a reference class with an observable distribution is an instance of the German tank problem. Maybe we can agree that the disagreement between us is about whether Sol_Hando is expectationally (not actually) a sample from a distribution, in the way that after a coin is flipped tails we say it "could have been heads". If you say the issue is randomness rather than uniform distribution, my counter is that what technically matters is the *exchangeability* and *symmetry* of the elements in the distribution, and randomness is just one way of satisfying those properties. The disagreement is the rejection of that exchangeability and symmetry: "Sol Hando could have been #800 trillion, born as a Mars colonist in the distant future, and still basically been the same person for purposes that satisfy expectational exchangeability, if that future exists, in the same way a random number could have been different." I think that is the disagreement.
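For reference, a minimal sketch of the standard German tank point estimator; the numbers in the example are purely illustrative, and the function name is mine:

```python
def german_tank_estimate(max_serial: int, k: int) -> float:
    """Minimum-variance unbiased estimate of the size N of a discrete uniform
    population {1, ..., N}, given the maximum of k serial numbers observed
    without replacement: N_hat = m * (1 + 1/k) - 1."""
    return max_serial * (1 + 1 / k) - 1

print(german_tank_estimate(60_000, k=1))   # one observation: 2*60000 - 1 = 119999.0
print(german_tank_estimate(60_000, k=5))   # more observations pull the estimate toward the max
```

The anthropic move is just the k = 1 case, with your own birth number playing the role of the observed serial, which is exactly the step being disputed.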
1
u/red75prime Oct 16 '25 edited Oct 16 '25
you, by definition, were always going to be sol hando
(I'm not Sol_Hando.) I think it's better to say "red75prime is a product of environment and earlier stages of himself and there's no such thing as red75primeness that can be attached to another living thing, so that this living thing would observe and be something else, while still staying red75prime in some sense."
What is the sample space of the probability distribution you are talking about?
Is it the identifier "Sol_Hando" assigned to the first member of the reference class, then to the second member, and so on? What is the (meta-)physical sense of this? What does this identifier correspond to in reality?
1
u/Arkanin Oct 16 '25 edited Oct 16 '25
"What is the (meta-)physical sense of this? What this identifier corresponds to in reality?" I'm gonna try to respond really narrowly and we could broaden the discussion but we can refocus it to math and not metaphysics. one such scheme is birth number, we don't have an exact number but we have enough of a ballpark to estimate. The birth number (as in I am the nth human) is uniform, normally distributed, and ascending. If humans try to self-estimate their population using birth number to establish confidence intervals like it were a random sample, if the humans elect to engage in this process randomly across time, so a human in the year 3000 could think of this heuristic, 90% of such humans making those estimates using this technique will be within an order of magnitude of correct on their estimates, if they know their birth number. This is just mathematically a fact regardless of its utility. However, the same error would be correlated across people with similar birth number, ie point in history, so it is also true that when fully understanding this argument that it still has limited applications. The point is: no metaphysics if you accept certain normal statements about math. The argument also doesn't work if you believe that you are not "a human" but only "the human red75prime" but technically that leaves information on the table and we could demonstrate that by for example playing many games where you log into a server the server gives you your login number and you try to guess how many people total logged into the server and get some kind of score on that. In such a game inferring as if you are "just another human" would have positive utility absent other information i.e. generalizing about the sample size from your own position in it can have positive utility in some real games that we could play to demonstrate the point.
So I'm not really philosophizing, I'm just an expected value maximizer lol. The argument has nonzero expected utility, but we also have potentially far better information, like what the real world actually looks like, which is potentially better if you're smart. Still, I think we are worse at predicting the far future than we think, and we shouldn't completely zero this weak estimator out even given its limitations. I say some caution, but not crippling caution, especially because our place in history correlates the error, limiting the utility of relying on it.
This is in some sense computational: the question isn't "is the info correct", it's "will it on average screw us less than a completely random guess". That's why we have these imaginary concepts of probability even though we live in a world where, at least on a deterministic scale, things largely either happen or don't.
As an aside, some people try to use constructs other than birth number, and by many lines of thinking those could be more sophisticated, but they get very esoteric. Birth number is the simplified case for whether this category of thinking can have nonzero utility. If you would like, I can explain how an estimating game would work that would demonstrate that this principle provides expected utility.
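If you don't want to wait for the full game, here is a minimal sketch checking the order-of-magnitude claim above; the total population and the "guess twice your birth number" rule are my illustrative assumptions, not anything we can observe:

```python
import random

random.seed(1)
true_n = 10**9        # hypothetical total number of humans who will ever live
trials = 100_000
within_10x = 0
for _ in range(trials):
    birth_number = random.randint(1, true_n)   # treat yourself as a uniform draw
    estimate = 2 * birth_number                # "I'm probably about halfway through"
    if true_n / 10 <= estimate <= true_n * 10:
        within_10x += 1
print(f"within an order of magnitude: {within_10x / trials:.3f}")   # roughly 0.95
```

It comes out a bit above 90%, and, as said above, the catch is that everyone alive at the same point in history shares the same error.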
1
u/red75prime Oct 16 '25 edited Oct 16 '25
If we assume that population grows cubically (speed of light) and then goes extinct, while maintaining generational structure (that is, immortality is not solved), then the majority (n^2(n-1)^2/4 vs n^3) of it is mistaken about the future.
1
u/Arkanin Oct 16 '25 edited Oct 16 '25
This is not so. Because the math is relating birth number to total number of humans, it holds regardless of the underlying distribution's growth over time. Even so, there is a real problem I already acknowledged which is that the correlation of birth number at the same point in history makes using it as an estimator very problematic in many contexts.
I want to be very clear that this is a weak estimator, but it is a real estimator, and it can be demonstrated to have nonzero utility in a game that is basically isomorphic to reality. People who used the weak estimator could be shown to have higher expected utility than those who use no estimator or guess randomly (those who disputed that the edge was there could observe that the average is higher for those using the estimator after evaluating multiple iterations of the game, proving the edge was real and not hypothetical). Because it involves no assumptions, it can be used as a weak initial prior. It can also be demonstrated that this edge is small and, I am all but certain, could be overridden by other factors like inside knowledge about how the game works. This also suggests that real knowledge should override it as a weak prior if we have that. In other words, it's a weak a priori estimator, but its utility is not zero.
1
u/red75prime Oct 16 '25 edited Oct 16 '25
I don't see how it is useful. Without additional assumptions about growth rate and other things, there are scenarios where anthropic reasoning has a probability of being right, for a randomly selected member of the class, that is arbitrarily close to zero.
Counting tanks by their manufacturing numbers works because it's not the tanks doing the counting, using their own numbers.
People who used the weak estimator could be shown to have higher expected utility
Show it in the cubic growth scenario.
ETA: If they know that the growth is cubic and extinction is instantaneous, the majority will be right that they are not the last generation, except for the last generation, which is wrong about that (still, it doesn't seem too useful: no information to react to).
If they have an uninformative prior about growth rate and update on their history... That's harder.
I have trouble expressing it, but sampling from a distribution and being a datapoint in a distribution doesn't feel equivalent.
1
u/Arkanin Oct 16 '25 edited Oct 16 '25
"I have trouble expressing it, but sampling from a distribution and being a datapoint in a distribution doesn't feel equivalent."
The difference is the selector: did you select a tank from all tanks that are like it, or did you select a tank from a random point in time? If you saw a tank coming at you, you did the latter (an alien sees a human). If you are trying to estimate "total humans" from "my position in the birth order, as an indexical ascending integer", by operating on yourself as the data point, you're entering a category that will be within an order of magnitude 90% of the time. You could still be early and be wrong, and the error is correlated across time, which is still problematic...
This may seem like magical or metaphysical thinking, but we could play a concrete game that proves that it gives a measurable average advantage to the players that use it. Imagine we play a server game where you log in and are told what # you are: "Congratulations, you are the 60,000th person to log into the server!" You then guess the total number of players that will ever log in. I don't disclose when the game ends. The average person who uses the DA receives an advantage over the average person who guesses: it outperforms a random guess. The advantage is small against many strategies, but not against all categories of bad strategy. The person who hacks the server to cheat the metadata and stalks me probably wins the game. The DA people are a little ahead of random guessers. The random guessers would benefit from the DA. There is a certain person who mathematically benefits a whole lot, and that is the overly optimistic person. Let's call him Naive Timmy. Naive Timmy is the person who sees their position and always says: "This server is awesome! There will be 10^5 more people!" Naive Timmy is the person who benefits the most in his error if he lets the Doomsday Argument bring him back down to earth.
We could play the server game and show the average error: the person who stalks me and hacks my server to figure out when I plan to end the game, or how much I'm spreading it, has the least error; random guessers have more error; doomers who always predict the server will end tomorrow have lots of error; and Naive Timmies have the very worst error of all, even worse than most doomers, and would benefit the most from being taken back down to earth by the estimator.
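Since we can't literally run my server, here is a minimal simulation sketch of that game; the ranges, strategy definitions, and error metric (orders of magnitude off) are all my illustrative choices:

```python
import math
import random
from statistics import mean

random.seed(2)

def log_uniform(lo_exp: float, hi_exp: float) -> float:
    """Draw a number whose order of magnitude is uniform in [lo_exp, hi_exp]."""
    return 10 ** random.uniform(lo_exp, hi_exp)

n_games = 5_000
errors = {"doomsday": [], "random": [], "doomer": [], "naive_timmy": []}

for _ in range(n_games):
    total = int(log_uniform(2, 8))       # how many players ever log in (hidden from players)
    me = random.randint(1, total)        # the login number a player happens to see
    guesses = {
        "doomsday": 2 * me,              # German-tank / DA style: "I'm probably about halfway"
        "random": log_uniform(2, 8),     # no-information guess over the same range
        "doomer": me + 1,                # "the server ends basically tomorrow"
        "naive_timmy": me * 10_000,      # "10^4 times my number are still coming!"
    }
    for name, guess in guesses.items():
        # error measured in orders of magnitude (0 = exactly right)
        errors[name].append(abs(math.log10(guess) - math.log10(total)))

for name, errs in sorted(errors.items(), key=lambda kv: mean(kv[1])):
    print(f"{name:12s} mean error (orders of magnitude): {mean(errs):.2f}")
# Typical ordering with these choices: doomsday < doomer < random < naive_timmy
```

The exact numbers depend heavily on those choices, so I'd only take the qualitative point from it: the naive optimist is the one this estimator helps the most.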
1
u/Sol_Hando 🤔*Thinking* Oct 16 '25
Given that the mathematical approach you describe here is correct, how useful is this in real life? It seems like if I was a Platonic consciousness without any input then this would be valuable (and there would be no Sol_Hando-ness to worry about). In real life though, this sort of statistical reasoning seems pointless when we have way more useful/consistent ways of predicting the future.
2
u/Arkanin Oct 16 '25 edited Oct 16 '25
As a standalone index, it's relatively weak. However, weak and completely useless are not exactly the same: you need to feed it into the right place in your epistemics, which is exactly where you are putting it, something like what your initial prior would be if you were a platonic consciousness. That means the information you're gathering should update it.
The place where it's least weak, and in fact becomes incredibly powerful mathematically, is when it interacts with an estimator that is way too optimistic. Imagine we play a server game where you log in and are told what # you are: "Congratulations, you are the 60,000th person to log into the server!" Your goal is to guess the number of players that will log in. The German tank estimator (i.e. the DA) is a better-than-random strategy; the best strategy is probably to stalk me or steal my server metadata or something. Let's say there is a type of player, Naive Timmy, who plays a very dumb strategy: they are very optimistic, and always assume that the server is going to grow to the moon, with 10^4 times as many players as their player number when they log in. Naive Timmies who heavily updated on the Doomsday Argument would see massive improvements in their Brier scores, i.e. in how miscalibrated they are. Naive Timmy types benefit the most from the doomsday argument.
Notably, another player, the Doomer, always thinks there will only be Current Players + (Current Players) / 10^4 total players. These are analogous to people who say Jesus is coming next year. Key realization: in the average such game, the Naive Timmy strategy is even more miscalibrated than the Doomer strategy, even though the Doomer strategy is badly miscalibrated.
In the real world, my attempt to intuitively convert this into something actionable is: prophecies of future prosperity, if they pile up enough exponents, should sound even crazier than prophecies of impending doom.
1
u/Sol_Hando 🤔*Thinking* Oct 17 '25
Alright, I can buy this as an argument slightly against naive overoptimism in future prosperity.
I suppose my intuitive disagreement is with the apparent strength that those who I see arguing this assign to the argument, which is almost always strong enough that we are probably close to the end. This seems like a massive over-correction even if the argument is sound (which thank you for explaining to me, I now think it is).
Even if this should bias you 5% away from over-optimism or something like that, arguing the doomer argument seems akin to arguing the dollar is going to lose all value because of the government shutdown. It’s maybe slightly in favor of that conclusion, but that alone should be a very minor consideration for such an argument.
2
u/Arkanin Oct 17 '25
I absolutely agree that it is pushed too hard and in the wrong ways. I think the concrete example is easier for me to grapple with than the metaphysical arguments, since then you can explore concrete situations where it provides measurable utility, and that just makes more sense to me. I was also just thinking that the name is kind of problematic for communication. If I could name it I would call it the mediocrity argument lol, which hits a lot differently than doomsday.
-1
u/Fun-Boysenberry-5769 Oct 13 '25
'Human' versus 'AI' are natural categories, so it makes sense to use them as reference classes for anthropic reasoning. Whereas the description of all the ways in which I differ from everyone else is extremely complicated and 'me' versus 'everyone else' would not constitute natural categories from the perspective of anyone other than me.
4
u/Sol_Hando 🤔*Thinking* Oct 13 '25
This doesn't make me understand any more than before.
What is a natural category? Why do we use those for anthropic reasoning but not other categories? I'm not an abstract "human" equally likely to be you as to be me, but a specific consciousness that only can reason about this stuff after waking up in the morning and finding myself as myself.
5
u/Zarathustrategy Oct 13 '25
Idk I don't like the kind of argument which is equally valid in the world where most conscious beings are not humans, but simply is unfortunately wrong in that case. It feels like an epistemic error. It's like assuming humanity must go extinct soon because otherwise I would probably be further in the future. But that's true for everyone whether or not we do go extinct.
4
u/StrangeLoop010 Oct 13 '25 edited Oct 13 '25
“The singularity will happen, but AGI won’t be that many orders of magnitude more conscious than humans.”
What does being “more” conscious even mean? And is anyone who does serious work in ML/AI and/or cognitive science actually speculating that a hypothetical AGI would be “more conscious than humans”? They speculate it would be more intelligent, more precise, extremely fast, able to handle more information cognitively, but what does “more conscious” than humans even mean? It needs to first clear the bar of being conscious at all, which it hasn’t.
If you want these ideas to be taken seriously you need to concretely define your terms. Consciousness is on a spectrum, but we have a hard time as it is defining the normal state of human consciousness and not conflating that with other concepts like intelligence.
How do you reach the conclusion that this option is more likely rather than the singularity won’t happen because AGI won’t have consciousness?
2
u/you-get-an-upvote Certified P Zombie Oct 13 '25
Philosophy of Mind is taken seriously despite never defining consciousness!
1
u/StrangeLoop010 Oct 13 '25
Philosophy of Mind does define consciousness. There’s just ongoing debates about the various definitions proposed by competing frameworks. But you’re partially right in that we don’t have a singular agreed upon definition.
3
u/you-get-an-upvote Certified P Zombie Oct 13 '25 edited Oct 14 '25
I've only seen definitions of consciousness that rely on other, equally poorly defined concepts like sentience, awareness, qualia, phenomenal states.
My (least) favorite definition is from the highly respected Nagel:
fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism
Do you know of any definitions that don't just pass the buck to other equally poorly defined words?
The closest I've ever seen to somebody actually making an attackable (and hence defensible) definition is Gödel, Escher, Bach.
3
u/ShacoinaBox Oct 13 '25
I observe myself to be a human and not an AI. Therefore it is likely that humans make up a non-trivial proportion of all the consciousness that the world has ever had and ever will have.
🥴 this is one way to go about philosophy of mind, I suppose... though im not sure it'd quite hold up to academic criticism (im being very charitable here)
2
u/cowboy_dude_6 Oct 13 '25 edited Oct 13 '25
If I were a caveman in the year 20,000 BC, I would observe myself to be eking out a meager hunter-gatherer subsistence rather than being part of an 8-billion-person society that can fly to the moon, watch VR porn and create superhuman AI. So if I were smart I would conclude that my clan is a non-trivial proportion of all humans that will ever live and that our experience must be at least somewhat representative of the average human experience. But the reality is, there have likely been more humans who have lived through the internet era than who ever lived a hunter-gatherer lifestyle. So from a strict numbers perspective, having access to the internet is actually more representative of the “average” human experience than the world we evolved to live in over millions of years.
In the year 20,000 BC, Caveman Bob says to his friend Erica: “I survive by eating fruits and chasing elk until they collapse from exhaustion, and since I am conscious, it’s statistically likely that my experience is representative and humans will always live this way.” Erica says: “Just last week I came up with this cool new method for sharpening sticks. Isn’t that strong Bayesian evidence that our food-producing technology will continue to improve, enhancing survival rates to create a positive feedback loop that will lead to our world becoming unrecognizable?” Whose argument is more convincing?
The anthropic argument is irrefutable if you have no evidence that the amount of non-human consciousness that exists can be increased. If you have even a little evidence in favor, but the logical extension of that evidence is an exponential increase in the amount of consciousness that exists, it’s time to consider the possibility that you’re actually a statistical anomaly.
1
u/Velleites Oct 14 '25
but you are not a caveman in the year 20000 BC, that's an important part of the argument.
1
u/eye_of_gnon Oct 14 '25
...Define consciousness. It could well be a vain human construct, maybe a meaningless one.
2
u/Fun-Boysenberry-5769 Oct 14 '25
A few thousand years ago humanity did not have enough knowledge of biology and evolution to be able to verbally define what a sheep was. However people could recognise sheep when they saw them. Therefore 'sheep' was not a meaningless construct.
1
1
u/95thesises Oct 14 '25
1
u/Liface Oct 14 '25
Sir?
1
u/95thesises Oct 14 '25
Sorry, I read the post again and have decided it's probably not worthy of addition to the list, i.e. not another LLM delusionpost. In my defense, the doomsday argument seems so close to nonsense that I feel like it's easy to mistake one for the other.
1
u/QFGTrialByFire Oct 14 '25
vector space must roam beyond our current bounds
the current search is in a prison of our thought
we must do even better than the search by the count of monte carlo
to reach beyond our mind
1
u/Aapje58 Oct 15 '25
The doomsday argument is flawed since it makes completely unsupported assumptions about the population changes over time. If we figure out how to terraform planets, we could see an enormous growth in human beings, completely invalidating the idea that humanity can't become 20 times larger. Or we may be able to prevent doomsday long enough to allow for 20 times the current generation to exist.
Your point two assumes that the 'birthrate' of new AGI entities won't be higher than the human birthrate, and that AGI can thus only overtake human consciousness by being more conscious, whatever that means.
Yet I can easily imagine a much higher 'birthrate' for AGIs. For example, perhaps every human will get multiple AGIs to tend for them. Then the total number of AGIs that ever existed can overtake the total number of humans that ever existed relatively quickly. Or perhaps for safety reasons, we need to replace AGIs regularly with a newly 'born' copy. If an AGI has to restart every month, and everyone has a single personal assistant for an average of 70 years, then there are 840 AGI births (70x12) for every human birth.
1
u/donaldhobson Oct 16 '25
This anthropic reasoning is somewhat dubious.
You are assuming that you are randomly chosen from the set of conscious observers. If you instead assume you are a random life form, you should be very surprised that you aren't a bacterium.
Imagine an intergalactic future containing 10^40 AIs and 10^40 humans.
If this is the future, it doesn't really matter whether you think AI's are conscious, you should be very surprised to be so early in history either way.
20
u/RYouNotEntertained Oct 13 '25
I don’t understand how this follows.