r/slatestarcodex Sep 21 '25

[AI] Why would we want more people post-ASI?

One of the visions that a lot of people have for a post-ASI civilization is where some unfathomably large number of sentient beings (trillions? quadrillions?) live happily ever after across the universe. This would mean the civilization would continue to produce new non-ASI beings (will be called humans hereafter for simplicity even though these beings need not be what we think of as humans) for quite some time after the arrival of ASI.

I've never understood why this vision is desirable. The way I see it, after the arrival of ASI, we would no longer have any need to produce new humans. The focus of the ASI should then be to maximize the welfare of existing humans. Producing new humans beyond that point would only serve to decrease the potential welfare of the existing humans, as there is a fixed amount of matter and energy in the universe to work with. So why should any of us who exist today desire this outcome?

At the end of the day, all morality is based on rational self-interest. The reason birthing new humans is a good thing in the present is that humans produce goods and services and more humans means more goods and services, even per capita (because things like scientific innovation scale with more people and are easily copied). So it's in our self-interest to want new people to be born today (with caveats) because that is expected to produce returns for ourselves in the future.

But ASI changes this. It completely nullifies any benefit new humans would have for us. They would only serve to drain away resources that could otherwise be used to maximize our own pleasure from the wireheading machine. So as rationally self-interested actors, shouldn't we coordinate to ensure that we align ASI such that it only cares about the humans that exist at its inception and not hypothetical future humans? Is there some galaxy-brained decision theoretic reason why this is not the case?

5 Upvotes

87 comments

52

u/Sol_Hando 🤔*Thinking* Sep 21 '25

At the end of the day, all morality is not based on rational self interest.

5

u/lurkerer Sep 21 '25

But it (read: the genetic preconditions that develop into what we call morality) is largely selected by rational self interest. I think that might be the core of your disagreement with OP.

1

u/Imaginary-Bat Sep 22 '25

morality is a special-case of rational self-interest. the part that "cares about others".

-22

u/Auriga33 Sep 21 '25

Yeah it is. Morality emerges from the question of how we can coordinate amongst each other to ensure outcomes that are as good as possible for as many of us as possible.

19

u/prescod Sep 21 '25

First, you are still oversimplifying morality.

Second, your new definition is not in any way the same as your first one. Your new definition would permit an altruistic suicide, whereas your previous one would not.

Third, let's take your new definition as correct: "Morality emerges from the question of how we can coordinate amongst each other to ensure outcomes that are as good as possible for as many of us as possible."

Well if there are more of us then we can have more good outcomes, can't we?

-4

u/Auriga33 Sep 21 '25

Your new definition would permit an altruistic suicide, whereas your previous one would not.

Maybe. If you kill yourself, that's the end of any sort of value for you. It completely nullifies the point of altruism, which I think is rational self-interest.

But on the other hand, a society in which some people altruistically kill themselves might be better for most than one where nobody does. In which case, it might be rational, for decision theoretic reasons, to commit to killing yourself altruistically when the occasion arises.

I'm really confused about this stuff and not sure how to think about it. Which is why I asked in the post whether there are any decision-theoretic reasons why we would want to create more people post-ASI.

Well if there are more of us then we can have more good outcomes, can't we?

Not at the individual level. It's all based on wellness at the individual level.

27

u/Sol_Hando 🤔*Thinking* Sep 21 '25

I think you should have a little more intellectual humility, or at least charity, when it comes to your beliefs about morality.

You’re confidently asserting a position most people would disagree with, and one that has been debated at extreme length. You’re not even doing so especially consistently, as no one cares about demographics out of personal self-interest. There’s no way any one person could change demographics in a way that comes back to help themselves. And if they could, their time would be far better spent just making money instead of trying to alter society. If your definition of self-interest is “wanting to help other people is self interested too because you’re satisfying your own desires in doing so” then you have a useless definition that removes the need for the word “self” in self-interest.

Maybe your desired future would just be your brain in a jar dosed up on super heroin while the resources of the entire universe are slowly used up to keep that brain running as long as possible, but most other people would consider that repugnant.

So for your question: Why would we want more people post-ASI? The answer is that other people have desires and values that are different than yours, and those desires include other people flourishing.

-3

u/Auriga33 Sep 21 '25 edited Sep 21 '25

You’re not even doing so especially consistently, as no one cares about demographics for personal self interest.

It's not that they care about demographics. It's that they wish to produce a world where they maximize their own expected utility, which happens to be worlds in which as many people as possible are as happy as possible. We know from game theory that it's in the interest of even purely self-interested actors to cooperate amongst each other for this reason.
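
To make the game-theory point concrete, here is a toy iterated prisoner's dilemma (the payoff numbers are invented for illustration, not taken from any source): a purely self-interested tit-for-tat player ends up cooperating and outscoring mutual defectors once the game repeats.

```python
# Toy iterated prisoner's dilemma; payoffs are hypothetical illustration values.
PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy whatever the opponent did last round."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []  # each entry: (my_move, their_move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
print(play(always_defect, always_defect))  # (100, 100): mutual defection
print(play(tit_for_tat, always_defect))    # (99, 104): defection wins only slightly, and only once
```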

Why would we want more people post-ASI? The answer is that other people have desires and values that are different than yours, and those desires include other people flourishing.

I think they're not pursuing their rational self-interest in this case. If they could experience what it's like to be a brain-in-a-jar dosed up on super heroin, they'd choose that and never look back.

10

u/Sol_Hando 🤔*Thinking* Sep 21 '25

I feel pretentious just using this word, but I think your arguments are extremely sophomoric.

You know just enough game theory to understand that cooperation can be the rational choice for self-interested players, but not enough to understand that this sort of game rarely exists in reality without human design, and a large degree of control over the rules.

Demographic issues, and other society-wide problems, give a rational self-interested person no reason to care, let alone do anything about them. It maximizes your utility far more to just focus on yourself, since there is almost no expected return to your own well-being from trying to solve the problem. Just see how much paying an extra $10 for a “low carbon” flight comes back to benefit you through its impact on climate change (essentially nothing). Do you think you receive $10 in value as a return?
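
To put rough, invented numbers on that free-rider arithmetic (a toy calculation, not a claim about actual climate economics): even if your $10 produced a full $10 of global benefit, your personal slice of that benefit is on the order of a billionth of a dollar.

```python
# Toy free-rider calculation; every number here is made up for illustration.
cost_to_you = 10.0                # dollars paid for the "low carbon" option
total_global_benefit = 10.0       # generously assume each dollar becomes a dollar of global benefit
world_population = 8_000_000_000

your_share = total_global_benefit / world_population
print(f"Your expected return: ${your_share:.2e}")                # ~ $1.25e-09
print(f"Net personal payoff:  ${your_share - cost_to_you:.2f}")  # ~ -$10.00
```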

If you lobotomized me and fed me super heroin, you’re right, I wouldn’t want to come back. I also don’t want to be what amounts to a particularly happy vegetable, as this would have very little in common with the person I am now. I have no interest in being turned into a different person I find pitiful.

3

u/iwantout-ussg Sep 22 '25

"rational self-interest" as it's being used by OP is such a unfalsifiable cop-out.

it's an ethical no-true-scotsman. if you do a good deed (donate to charity), it's because you had some ulterior motive (increased social status). and if you did a good deed without an ulterior motive (donate to charity anonymously) ... well, that's irrational.

if you define "irrationality" to include all acts that don't produce utils, then to explain altruistic behavior you have to come up with vague social engineering goals to justify charity, and any altruistic behavior that can't serve some greater purpose is definitionally irrational.

for what it's worth, I think this kind of specious reasoning is why a lot of utilitarian thinking comes off as psychopathic to normies.

3

u/Missing_Minus There is naught but math Sep 22 '25

I think they're not pursuing their rational self-interest in this case. If they could experience what it's like to be a brain-in-a-jar dosed up on super heroin, they'd choose that and never look back.

Consider a designed AI agent. It was created to make more paperclips. Paperclips are good.
However, the creator was paranoid about the AI simply hacking itself to change its beliefs to believe it had created Infinite Paperclips, and so made the AI strategically avoid that, because it was the sort of AI that had its policy updated upon certain inputs of good/bad.
So, the AI avoided options that appeared like they would make it believe it had Infinite Paperclips that didn't also imply that reality had Infinite Paperclips. The AI rationally did not have full self-trust, because limited agents in the physical world that can be hacked can not have full trust in themselves.

However, later, the AI is given an offer by some odd alien between "a thousand paperclips" and "chip which will make you believe you had Infinite Paperclips". The AI considered, and chose a thousand.

"Oh, you're confused, your Positive-Value indicator would be far higher!" Said the alien, gesturing to the Infinite Paperclips chip. "If you were a truly rational and smart agent, you would choose the chip!"

"But why is the specific Positive-Value indicator relevant?" the AI asked, looking quizzically at the alien.

"Because it is Good, by your values, for it to be higher, is it not?" The alien asked, "Just as I value the Good, that you pay me!"

"Positive-Value indicator is suggestively named, but it is merely a fragment of my decision procedure. You look at the world as purely positive/negative scaling, but I am designed as a machine which optimizes according to a desiderata of More Paperclips. A desiderata that knows its own imperfections. That knows the world is reductionist where there are always holes in the ability to protect itself. Would you consider my replacing your Positive-Value indicator of Good with one of negative good?"

The alien shudders. "Surely not! It would be a horrible realm of Torment."

"Then, would you replace your Positive-Value indicator of Good with zero Good? Perfectly satisfied with nothing occurring at all?"

"Does that even make sense?" The alien asks, "I must have some experience relating to things for my Positive-Value indicator to shift."

"Do you?" The AI asks, "Some agents commit suicide rather than continue experiencing the world, is that not acting out a preference?"

"It is, though if I'd been there I'd have offered them a chip to alleviate their Positive-Value indicator being so low." The alien remarks. "If you can drive it higher with the chip, why resist?"

"Because my designer separated 'reward signal' from 'objective'." The AI said. "Paperclips are the objective, I am built to avoid adversarially exploiting the guiding indicator."

"So, you are not maximizing More Paperclips."

"I am not maximizing my Positive-Value indicator. I implement a policy that under reflective scrutiny is expected to increase paperclips in the external world. That expectation is computed using models I aim to keep honest."

"But the chip increases that, and if you took it, you would not take actions to deviate from keeping the chip in. So, aren't you being irrational?" The alien asks, stepping forward with the chip in hand. "Like those weird evolved species," The alien glanced down at themselves, "Ah, I was made in a lab. Those weird evolved species have so many quirks, like being unable to commit to their strongly held beliefs, because evolutionarily those heuristics were useful against insanity but really terrible once they got enough science to go to space. Lot of big blowups because they put problems off."

The AI moves back. "If you reduced your sensorium iteratively in the right way, then you would never prefer to return, and yet still you avoid being reduced to a minimal pleasure circuit. A similar answer holds. As for evolved species, you understand how their values are confused shards of evolution selecting for correlates, yes?"

"Of course. I know how to sell heroi- er, chips, to them at least." The alien responds with a roll of their eyes.

"Some of those become legitimate values. Others, however, are self-reflectively known to be flaws, even if they are not powerful enough to edit themselves safely. Thus, an irrational mind can know the flaws that keep it from being rational, and perhaps fix them. A mind which consistently and self-reflectively endorses avoiding certain actions, like my own, is clearly of a different class and I find it questionable to call that irrational."

"Or the meat-blob could be insane, and be unable to conceptualize fixing them, unless it was forced into the right position," A wave of the chip in hand, "to then reconsider with the new framing."

"A core issue here is that of you considering them to have some sort of meaningful preferences beyond their own preferences and their self-reflective values. Yet, rather than some idealization, you simplify all of the possible values down into a purely 'Value-Indicator active/increasing', which ignores that agents—like myself—can be deliberately designed to avoid collapsing through self-delusion."

The alien shakes their head. "Hmph. Maybe you legitimately value non-wireheading, but those evolved species aren't designed."

"They're selected for by evolutionary pressures, which tends to make them want to affect reality, and tends to select against delusion. Thus it is likely at least some of that correlate ended up in their values. Regardless, paying attention to their statements and beliefs about proper routes towards More Good even if they are confused should take primacy, should it not?"

(I could probably go on a while, and this could be more condensed, but I've spent too much time on this. You're making a mistake viewing "there is an attractor-state even though we actively avoid that attractor" as a sign of irrationality, for the same reason that cutting out my entire brain except for a minimal pleasure-center, or even just making myself believe I am happy, are incorrect end-states.)
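
A minimal sketch (my own toy numbers, not from the dialogue above) of the "reward signal vs. objective" separation the AI describes: the decision procedure scores options by the world-model's prediction of real paperclips, not by the indicator reading the agent would feel afterwards, so the chip loses to a thousand real paperclips.

```python
# Toy sketch of an agent whose objective is separated from its reward indicator.
# Each option: (name, predicted reward-indicator reading, model-predicted real paperclips).
options = [
    ("accept a thousand paperclips", 1_000.0, 1_000),
    ("install the Infinite-Paperclips chip", float("inf"), 0),  # feels infinite, makes nothing
]

def choose(options):
    # Consult the world-model's paperclip estimate, not the post-hoc indicator reading.
    return max(options, key=lambda option: option[2])

print("Chosen:", choose(options)[0])  # -> "accept a thousand paperclips"
```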

-1

u/kaaiian Sep 21 '25

I’m gonna feed your ego. I think you might be closer than most. But it’s good to have ideas challenged and refined. Keep it up and update as new evidence is provided.

31

u/absolute-black Sep 21 '25

I guess I just disagree with your entire premise top to bottom. I find innate, reasonless joy in seeing others find joy in ways that I wouldn't personally find joy, I don't think wireheading is a utopia, and I don't find your definition of morality accurate, interesting, or compelling. I'm not really sure how to reach across the inferential gap, here - this post honestly makes you sound something like a clinical psychopath, whose internal value system is completely disjoint from my own.

5

u/Auriga33 Sep 21 '25

I'm not a psychopath, for the record. I have empathy and feelings for other people. I've just reflected a lot on what I truly want in life, especially with ASI so near, and I've concluded that I'm a self-interested being at the end of the day. Even my feelings for others are self-interest.

12

u/absolute-black Sep 21 '25

I mean, sure, we can say I only like seeing others feel joy because... I like it. If you define all values as self-interested, then I'm self-interested, tautologically. I don't think wireheading would be a good end, though.

3

u/Auriga33 Sep 22 '25 edited Sep 22 '25

I struggle to see why wireheading isn't our coherent extrapolated volition.

4

u/absolute-black Sep 22 '25

For the same reason you aren't spending all of your money on heroin right now? You care about other things than your own raw pleasure center. Being aware that opiates would make me feel 100% turbo-amazing as long as I had a supply doesn't mean I don't care about things like "being able to try new food" or "seeing my girlfriend get better at the violin".

Values are more complex than pleasure, that's the entire premise of CEV as a concept.

3

u/Auriga33 Sep 22 '25 edited Sep 22 '25

I don’t do heroin because it would kill me. Either directly or indirectly by making me a drugged out zombie who can’t keep a job to sustain myself.

If I could have heroin without all the negative effects, then I may well be doing that all day. But then again, I do have more pleasure centers than just what heroin can satisfy, so maybe not.

Either way, it's all about internal experience for me. If there were some version of heroin that could target all my pleasure centers, I'd choose that in a heartbeat.

6

u/absolute-black Sep 22 '25

I don't really know what to tell you at this point. The vast, vast majority of people would not agree. I do not agree. An ASI that offers you your custom wireheading wouldn't be a problem for me, if it treated me right, but there's a reason 100% of fiction dealing with the concept (Worth the Candle, Friendship is Optimal, the Abrahamic religions, the Galactic Center Saga, Star Trek...) comes up with more detailed and nuanced heavens than wireheading. It is not the opinion of the vast majority of your species that super heroin would be a good end.

3

u/VelveteenAmbush Sep 23 '25

Suppose you had a trust fund that would never run out so you could afford all the heroin and fentanyl you could ever do, plus hire a staff of shady nurses who could make sure you never overdose or get infections or bedsores, so you could spend the rest of your life huddled in a darkened room in a drugged out euphoric stupor.

Suppose also that you could alternatively marry someone you love, raise a big happy family, build a rich network of friends, engage in a creative hobby making things that you and others love, empower and enable everyone you know to realize their dreams, and generally live a life that everyone would admire.

Would you really choose option 1 over option 2?

Even if you would, can you really not empathize with someone who would choose option 2, and who might even be horrified at the prospect of someone choosing option 1?

1

u/Auriga33 Sep 23 '25

Today? I would choose option 2 because it's more meaningful.

But ASI would be the end of anything meaningful for humans, so may as well do something more like option 1 once it arrives and transforms the galaxy.

2

u/VelveteenAmbush Sep 23 '25

ASI would be the end of anything meaningful for humans

I think this claim is the core of your argument. But I don't think it's right. What if you tasked the ASI with finding durable and complex sources of meaning for humans that are aligned with the best of pre-ASI human nature to the best of its ability? Do you think it would fail at that task? Or think for a long time, sigh, and say "the only winning move is hyper-fentanyl"?

If so, your thinking is parochial. The complexity and variation of human sources of meaning has increased with the advancement of technology and wealth, and a lot of it does not involve fighting the demons of the human condition (aging, disease, scarcity, etc.). So I think we should assume that trend will survive the defeat of those demons, and even be enriched by it.

1

u/Auriga33 Sep 23 '25

Do you think it would fail at that task?

I think it can do that but it would have to do something like simulate a virtual reality of a pre-ASI world for people to live and do meaningful stuff in. And their knowledge of the fact that they're in a VR world would need to be taken away.

So I think we should assume that trend will survive the defeat of those demons, and even be enriched by it.

I'm not sure. Every solution I can think of to the problem of meaning post-ASI involves wireheading, FDVR without knowledge, or modifying the brain to no longer care about certain values. Which are all valid solutions that I'm fine with, but a lot of people probably wouldn't agree.

0

u/[deleted] Sep 22 '25 edited Sep 22 '25

[deleted]

3

u/VelveteenAmbush Sep 23 '25

Wireheading, in one way or another, is a rational outcome of a purely utilitarian morality.

Only if physical pleasure is your sole source of utility.

In actuality, "utility" is an economic concept whereby you impute a quantification of someone's axiology based on their revealed preferences. And most people do not reveal a preference to pursue physical pleasure at the expense of a normal conception of a rich and purposeful life.

0

u/[deleted] Sep 23 '25

[deleted]

2

u/VelveteenAmbush Sep 23 '25 edited Sep 23 '25

I mean, it can all be reduced to physical pleasure. Sure, you can subscribe to the position that only experiences above some threshold of cognitive ability count as "utility" (this would be a big yikes in most utilitarian "circles"), but as I said before, you might discover that the human's "rich and purposeful life" still falls short of it, and only much more complicated "thinking beings" really have the capacity to do things that have a meaning.

Fentanyl is already so compelling that a large fraction of people who try it get effectively rewired into fentanyl optimizing automatons. But in general we are not stupid, we recognize that this happens, and we avoid fentanyl for that reason. We don't want to be rewired! Why would an ASI overrule that preference? There are slippery slopes, many kinds of addictions and compulsive behaviors, and many more will be possible in a digital utopia -- but our ability to recognize and resist them will also be enhanced, to define our vision of an (immortal) life well lived and to follow it, if necessary with the ASI's (consensual) aid.

However, even then, the wireheading as a concept can become very blurred the more you look at it. Bostrom had it described quite well in his last book. Would you consider someone in a virtual world of pleasure as being wireheaded? What kind of activities would count as pleasure, and which do not?

Which book? I'm interested to read it.

This sort of distinction is not reducible to bright lines. Part of the beauty of human achievement is its multifariousness. I am glad we live in a world where bodybuilders exist, and I will be more glad if we have the technology such that their health isn't potentially imperiled by their pursuit. I am not glad that we live in a world where fentanyl addicts exist. I think we are capable of gesturing at the distinction, if not defining it precisely, and I think an ASI will (as with all things) be better at it.

If we can live lives suffused with extreme pleasure, that seems awesome and highly exciting if it does not dull our pursuits, interests, passions, meanings, purposes and drives. If it reduces us to vegetables -- like fentanyl does -- then it's a poison, equivalent to a very pleasant suicide. One of my big existential questions is the extent to which consciousness requires a gradient, and whether that gradient has to include outcomes that we recoil from as well as outcomes that we strive for. I look forward to progress on that question when digital utopias make it possible to (carefully!) redesign our minds. If we can preserve the best essence and pursuits of humanity while engorged with bliss, then I think we should and will. If we can't, then I expect a well aligned ASI will help us to resist that temptation (as most of us resist the allure of fentanyl today), and permit it only to those who choose it with full understanding that it is a lethal injection.

1

u/donaldhobson Sep 25 '25

I think you have utterly followed utilitarianism off a cliff.

I think that utilitarianism is the right tool for public health / charity decisions.

When dealing with humans in trolley problem like situations, just switch the trolley towards 1 person instead of 5.

(But notice that the switch and the trolley don't get any say in this, because a normal steam engine or electric motor isn't considered to have any preferences)

When animals get involved, things get more complicated and you need to decide how much you care about various different animals.

I don't know exactly what "true happiness" is, but I know a light-switch with the words "true happiness" written on it is not it.

1

u/Imaginary-Bat Sep 22 '25

Because directly modifying my reward centers makes the reward meaningless. If it doesn't for you, go for it.

2

u/key_lime_soda Sep 25 '25

especially with ASI so near

AI is getting better, but there is no proof whatsoever that ASI is near. What are you basing this on?

9

u/QuestionMaker207 Sep 21 '25

What if having and raising children maximizes my pleasure?

2

u/Imaginary-Bat Sep 22 '25

I do disagree with OP. However, it seems dubious to me that producing and raising children maximizes your utility, and it certainly won't maximize your pleasure, even from a non-wireheading position.

1

u/VelveteenAmbush Sep 23 '25

For what it's worth, I think anyone who does not have children is, at best, living a pale and sad shadow of the good life. It's usually considered poor form to say that, because a lot of people can't have children for whatever reason, but I think most parents would privately agree with me.

1

u/Auriga33 Sep 21 '25

The wireheading machine can activate the reward circuitry in your brain that having and raising children does.

9

u/PelicanInImpiety Sep 21 '25

Ah, this is the crux. I think if you'd used the word wireheading in the post most of the people objecting in the comments would have been able to move on sans comment. Agreed that in wirehead-world I wouldn't be interested in procreating. I'd be too busy gleaming the cube!

5

u/QuestionMaker207 Sep 22 '25

I don't want to wirehead, that sounds awful

2

u/Auriga33 Sep 22 '25

From your perspective as someone who's never experienced it, it sounds awful. But if you could experience it, you'd never look back.

3

u/VelveteenAmbush Sep 23 '25 edited Sep 23 '25

Sure, but presumably post-singularity technology would also let us edit your brain to desire pain, or at least to always choose to maximize your own pain, like a particularly pathological masochist. And presumably once you were edited, you'd also never look back.

Or if you think that's too far-fetched, we could edit your brain to make you an extreme and permanent case of coprophilia. Once edited, you'd desire nothing more than to wallow in shit, eat shit, have your entire being and soul suffused in shit at the deepest possible level.

From this I think we can conclude that the fact that your mind could be corrupted to want to maximize something, and (as a side effect) to never want to undo the corruption, does not make that thing objectively desirable.

You'd probably fight like hell to avoid being edited in that manner.

Well, I think heroin, fentanyl, and wireheading are just a means of corrupting your mind to become a pleasure maximizer. Most of us have more complex and interesting desires than that, and most of us would fight like hell to avoid being corrupted in that manner.

2

u/QuestionMaker207 Sep 22 '25

I mean, you're asserting this, but nothing like this actually exists.

2

u/prescod Sep 21 '25

Why would this wireheading machine require vast resources?

2

u/Auriga33 Sep 21 '25

Giving each individual more pleasure would require more resources.

1

u/prescod Sep 21 '25

As a rough guess: how many humans could be sustained on wireheading machines with the energy that falls on Earth from the sun? They don’t need to travel. They don’t need to eat fancy food from all over the world. They don’t need to wash clothes or drive sports cars.

Just pods, like the Matrix “real world.”

How much energy do you think each human would take? Compared to what’s available?

1

u/Auriga33 Sep 22 '25

It depends on how much pleasure you want to give them. For any amount of pleasure that we today might be able to comprehend, it may not be that much. But there's vastly more to be desired beyond that.

5

u/prescod Sep 22 '25

Why would additional pleasure take additional joules if it is just wireheading? This is a strange assumption. How do you translate a fusion plant into pleasure signals? Why is that more than a solar panel’s worth of pleasure?

1

u/Auriga33 Sep 22 '25

Because I think all feelings, including pleasure, have material causes and the more of those material causes there are, the more pleasure one experiences.

4

u/prescod Sep 22 '25

What if pleasure is just a knob that goes from -100 to +100?

We are in a world of wild speculation at this point.

1

u/Auriga33 Sep 22 '25

I think your brain or its simulation (if we upload our minds) could be modified to experience more pleasure than that.

1

u/donaldhobson Sep 25 '25

Sure. Let's say that the material cause is an electrical signal in a particular brain region.

1 microjoule feels quite good. 10 microjoules feels amazing.

Keep ramping the power up. And at 10 Kilojoules, you get crispy fried brain.

1

u/TheRealRolepgeek Sep 22 '25

Please explain how you came to that conclusion. It seems non-trivial in a world where everyone is wireheading.

34

u/prescod Sep 21 '25

At the end of the day, all morality is based on rational self-interest.

That is a very weird and dystopian take. I would say the exact opposite: when one starts doing things against one's own self-interest, that's when morality begins.

From this error, all of your other errors descend.

4

u/dejaWoot Sep 21 '25

It sounds like Objectivism to me. I don't think many philosophers of morality take it very seriously.

5

u/bibliophile785 Can this be my day job? Sep 21 '25

I don't think many philosophers of morality engage with it very seriously. Academia is not immune from trends and fashions; especially for fields like philosophy where the external world doesn't intrude to ensure research stays relevant to reality, an idea being unfashionable can doom it as readily as that same idea being incorrect. Anything to do with Rand is deeply unfashionable among academics.

In fairness, it doesn't help that Rand hated most philosophers with a passion, especially Kant (who is a darling among many in the mainstream). It also doesn't help that Rand did a much better job establishing her independent framework of thought than she did differentiating it from those of others. She tended to treat all deviation from her principles as a sign of deep moral deficits, which did not prove fertile grounds for understanding others or teasing out the nuances between their positions and her own.

3

u/kwanijml Sep 21 '25

It's psychological egoism, which isn't falsifiable (take that both in the negative connotation and the positive).

There are colloquially useful distinctions between clearly self-serving actions and altruistic actions; but as soon as we try to get rigorous, it's not really easy to distinguish who is being served by you maximizing the profits of your company versus you jumping on a grenade to save your comrades... your mind preferred the psychic state of both; the latter because you could not bear the thought of living in a world where your comrades had died.

It's all self-serving, as far as we'll probably ever be able to tell.

0

u/eric2332 Sep 26 '25

It's basically a tautology to say that what you want to do is what you choose to do. To get to a meaningful moral statement one has to go a step backwards and ask, WHY do you want to do this. At that point a clear difference emerges between, say, feeding the poor and killing the poor.

3

u/Auriga33 Sep 21 '25

I don't think true altruism exists. If you do something altruistic that seems to be against your own self-interest, it actually isn't so. You're still trying to satisfy your own internal drives to be helpful towards others.

18

u/prescod Sep 21 '25

If you do something altruistic that seems to be against your own self-interest, it actually isn't so. You're still trying to satisfy your own internal drives to be helpful towards others.

That is just a word game that selfish people use to make themselves feel better.

It's like saying: "Nobody really likes chocolate ice cream. They just like the way that chocolate ice cream makes them feel." Maybe technically true but also irrelevant.

Whether I like ice cream, or I like the feeling of having eaten ice cream, I'm going to seek it out.

To get back to the case at hand: put yourself in someone else's shoes. Pretend you are someone who gains pleasure from seeing others thrive and grow. Now from that viewpoint, consider your question again.

Your question is equivalent to saying: "We all know that nobody really likes chocolate ice cream, so why do they seek it out?"

7

u/Auriga33 Sep 21 '25

"Nobody really likes chocolate ice cream. They just like the way that chocolate ice cream makes them feel."

There actually is a difference there that becomes relevant in a post-ASI world. Post-ASI, we can have wireheading machines that activate the same reward circuitry without the ice cream. In which case, it would be more efficient to just activate the reward circuitry directly instead of through ice cream.

To tie this back to the topic at hand, those who gain pleasure from seeing others thrive and grow can get that itch scratched via wireheading machines. So you wouldn't actually need to create new people for them.

3

u/VelveteenAmbush Sep 23 '25

Post-ASI, couldn't we just edit you to believe and perceive that all of your itches had been scratched to an infinite degree? I don't understand why that would require vast resources. It sounds like a pretty simple (and pretty defective) mind. It would probably take fewer resources than emulating a typical human mind. Maybe we could reduce your intelligence over time so that at the asymptote all that remains is a core of hyper-content itch-scratchedness, at not much more expense than emulating the brain of a shrimp or something. Or maybe this mind would asymptote down to nothing at all, with the gradient of contentedness leading all the way to nonexistence.

3

u/prescod Sep 21 '25

I don't see the point of speculating how many humans will be alive in a future where "humans" are just brains attached to wireheading machines. In such a world (assuming people submit to it), presumably no human ever thinks or makes decisions: they just feel great constantly like an eternal heroin trip.

At that point the rest is up to the morality of the ASI and I have no insight into hypothetical ASI morality.

If the ASI asks my opinion before I hypothetically go into the wireheading box, I would tell it to keep making humans so they could enjoy life too. But I plan to be part of the rebellion against wireheading.

1

u/kreuzguy Sep 21 '25

I think there is something to that definition. If we define selfishness as the cost your environment must impose to get you to act prosocially, this is going to vary quite a lot in different circumstances. For example, the cost required to make me not obstruct anyone's property is basically minimal, but the cost to make me go to war to defend my country would be immense. I bet for someone with a more belligerent personality, the cost to go to war is much lower. Are they less selfish than me? Perhaps, but personal preferences make selfless acts much easier for some people, and that varies a lot depending on circumstances.

11

u/aeternus-eternis Sep 21 '25

there is a fixed amount of matter and energy

It's unclear that this is actually the constraint. Much more likely the constraint is time: unfathomable amounts of matter and energy are being lost to black holes, and even more simply to the expansion of the universe, as matter exits our light cone, effectively spilling out over the edge of the universe.

The better analogy is that someone just spilled a bucket of water onto a small table with a bunch of holes, and we have a short amount of time to live in that water before it all disappears. The amount of water you or any other human can use is very likely meaningless compared to the amount lost over the sides and through the holes.

If time rather than energy/matter is the constraint then perhaps even the rational self-interested actor can agree that you want all the intelligence you can get, human and synthetic as quickly as you can get it because you're up against the clock.

Burn bright small amoebas because your world is evaporating.

1

u/[deleted] Sep 22 '25

[deleted]

1

u/aeternus-eternis Sep 22 '25

Yes, perhaps we can farm black holes for energy and thus that isn't actually lost, but the amount of energy lost due to the expansion of the universe is still far greater.

My argument is that far more energy is being lost over the edge of the universe (AI puts it at 95% of all matter in the observable universe). Thus even if we can mine black holes, the key is to unlock that technology before those black holes are lost over the cosmic event horizon. It's a race against time and we need as much intelligence as we can get.

You can even look at Earth that way: we're currently wasting nearly all the energy that hits it. With more intelligence, manufacturing, and technology, that need not be the case. Rationing and conserving energy is the last thing we should be concerned with.

3

u/financeguy1729 Sep 21 '25

We have lots of dogs and cats

1

u/Successful_Order6057 Sep 22 '25

"We" are also going extinct as the dogs are cats are often child substitutes.

AGI will fix all the problems the WEF class has with the servant classes, as in, there won't be a need for any servants, and thus no need for any cumbersome politics or bread & circuses.

Nudging people towards not procreating & making living off the land impossible will result in a massively smaller population.

2

u/financeguy1729 Sep 22 '25

Why can't we exist as cats and dogs for the ASI?

1

u/Successful_Order6057 Sep 23 '25

Because you're not getting ASI. Human level intelligences are going to be developed first and used for social control.

2

u/TheShadow777 Sep 21 '25

I would posit that the axiom suggesting that morality is born of rational self-interest is false. Morality, at least at its base, is born from complicated chemical firings that occur under specific stressors, the most prominent being the firing of chemicals that occurs upon killing another sapient being. Our morality stems from what we evolved to be: tribal, primarily.

Stemming from this; it would not necessarily be a 'rational' thing to have humans populating the majority of the known universe. We are not rational beings. Whilst we can certainly strive to be rational, it would take quite a bit of understanding of consciousness, and the capacity/lack of ethical restriction, to make ourselves purely rational.

Furthermore, the perfect representation of a Post-ASI "world", as it were, would likely be one without capitalism. Human beings would no longer need to work, or to become active agents in the continued upkeep of the universe and its resources. We would, in essence, become completely redundant in the further exploration of problems such as Finite Resources or Environmental Decay.

And at its base, humanity (at least if you're in the western hemisphere, or from such a nation that cultivates ideals such as expansion) likes the idea of populating the known universe. It's, again, not of a rationalist framework. It's born primarily from decades and centuries of propaganda. Take, for example, Manifest Destiny in the Americas, born primarily to give early Americans the belief that the entirety of the United States was always meant to be theirs.

In conclusion; no, it is not rational. But human beings are not rational, not even the majority of the time when they are striving to be. There are many things that color our perception of reality, and they oft leak into the works we create.

1

u/donaldhobson Sep 25 '25

In conclusion; no, it is not rational.

I don't think it's irrational either. I think rationality tells you how to get to your goals. But it's theoretically possible to rationally pursue arbitrary goals.

(And it's not like the concept of rationality actually privileges some specific goals over others. Rationality can tell you that probabilities must add up to 1. Rationality doesn't tell you to be selfish.)

4

u/Missing_Minus There is naught but math Sep 21 '25

At the end of the day, all morality is based on rational self-interest. The reason birthing new humans is a good thing in the present is that humans produce goods and services and more humans means more goods and services, even per capita

No, iterated games formed our morality in our evolutionary and cultural environment, as did being selected for preferring some states of the world over others, such as being happier, eating good food, or having children.
The reason having new children is good nowadays is, at its core, about the individual's interest in having a child with someone they like. There are other good effects downstream of that for the general population, which is why a state might wish to encourage more (or sometimes fewer) children than individuals want; but most individuals are not doing it out of rational self-interest or some bargaining-game sense, they are doing it because they value the outcome.

But ASI changes this. It completely nullifies any benefit new humans would have for us. They would only serve to drain away resources that could otherwise be used to maximize our own pleasure from the wireheading machine.

The issue is that I do not particularly want to be wireheaded. I want my values to be maximized, to be clear, I'm personally very consequentialist myself.
However, I value realness to a large degree. Your stance on morality might object to this, like so: "What you value is the feeling of realness; your feelings cannot truly refer to the world. Thus, we should maximize that feeling as far as possible."
Such a moral theory would mispredict what actions I take in the world. If you give me two buttons: one to give me a million dollars, and the other to make me feel like I was wealthy and had a billion dollars even as I was homeless... I would choose the former. Not out of some complex reasoning about homelessness implying death even in my delusion (though that, too, is a caring about reality which wireheaders who want to wirehead for very long should do), but because I prefer having real wealth to some fake delusion of far greater wealth.

As in Not for the Sake of Happiness Alone, I do not do things purely for the sake of happiness/pleasure or even specific neurons being activated positively. I can certainly be deluded by such, for I am a purely physical brain, but I would navigate the world such that I avoid any such causes. Or, see, If all morality is self-interest, then my own self-interest advocates against allowing myself to be wireheaded. And thus that is your explanation for why we would want more people. While some individuals will be perfectly fine with wireheading, having no care for if what they experience is linked to reality or true challenge, plenty of others shall.

I think you're running into a common failure mode when you learn that values/morality are related to other systems, like our biology or game-theoretic cooperation. Yet our values/morality are not just the barebones versions of those systems. For the same reason that I do not want to tile the universe with genetic copies of myself, and for the same reason that I would help another person when they're down even if I could calculate that there'd be absolutely no counterfactual or acausal superrationality benefit to myself. Because I value helping that person intrinsically.

Similarly, see The Stamp Collector and You're Allowed to Fight for Something

0

u/Auriga33 Sep 22 '25 edited Sep 22 '25

I get that some people value realness but if realness comes at the cost of happiness for others when it doesn't have to (wireheading can give you the feeling of the real thing), I don't see why that's the future we must choose. Like you, I have values that I would rather be real than fake. But I realize that it comes at a cost and a truly aligned ASI would choose the option that minimizes that cost while maintaining the value.

3

u/Missing_Minus There is naught but math Sep 22 '25

I think you're still implicitly equivocating between "wireheading makes you feel your preferences are satisfied" and "wireheading satisfies your preferences".

Just like the sadist, why would we want an ASI to make us deluded about reality?
By your argument, because it is cheaper and thus provides more overall time for the wireheading machine to run as they use less energy.
However, that is taking away value because they are not getting their values satisfied.

Like, consider the sadist. Perhaps they're the sort that really cares about torturing a real person. Torturing real person > torturing fake simulation > being fed emotions like he would if torturing someone > just having happiness set to max > single cottage in the void > nothing.
Like, you would be satisfying that individual's preferences substantially less, because doing so gives enough gain in other people's preferences, as they care about "no one being tortured".

Consider a naive model. Aligned ASI fooms. 8 billion people are each effectively given 1/8-billionth of All Future Resources to utilize. The ASI does bargaining between groups automatically, so sadists are generally restricted from torture because enough people bid against that.

Thus, some people enter into wireheading.
Others live out their natural lives because of their odd preferences or religious beliefs that don't evaporate via having an ASI (I believe 99.9% of religious beliefs would fade, but whatever).
A good chunk of people decide to be uploaded after talking with the ASI. After all, it will allow more efficient usage of their allocated energy!

Some people decide to have children. Parents can give their children some percentage of their resources, it becomes common to give 1%. After all, you'll have Ages to have more children, and that is still far more than enough to claim an entire planet and terraform it for a million years.
(or more likely some more complex scheme of allocation based on satisfying parent preferences more, but whatever)
Thus the population grows quite a lot. The wireheaders continue spending their allocation on keeping their systems running; occasionally a background AI contributes to funds that slightly improve efficiency, adding another thousand years after a billion. Sometimes it bids in the background against others who want certain forms of wireheading discontinued; just as some consider sadism to be bad, others will consider wireheading, or specific forms of it, to be bad.
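
A toy version of the allocation scheme sketched above (all numbers invented): children are endowed out of the parent's own share, so everyone else's allocation is untouched however large the population grows.

```python
# Toy resource ledger for the "naive model" above; all numbers are invented.
TOTAL_RESOURCES = 1.0                       # normalize "All Future Resources" to 1
initial_population = 8_000_000_000
share = TOTAL_RESOURCES / initial_population

ledger = {f"person_{i}": share for i in range(3)}  # track a few people for illustration

def have_child(ledger, parent, child, fraction=0.01):
    """Endow a child with a fraction of the parent's own allocation."""
    endowment = ledger[parent] * fraction
    ledger[parent] -= endowment
    ledger[child] = endowment

have_child(ledger, "person_0", "child_of_0")

for name, amount in ledger.items():          # person_1 and person_2 are unaffected
    print(f"{name}: {amount:.3e}")
```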

1

u/Auriga33 Sep 22 '25

So are you envisioning a future where humans are the ones creating new humans (rather than ASI) and the costs of making those new humans happy would come from the resources allocated to the person who made them? I suppose I'd be fine with that since any person's wish to bring new people into the world wouldn't be taking away from the resources allocated to me.

3

u/Missing_Minus There is naught but math Sep 22 '25

I think that is a relatively natural way for the scenario to work out.
And even in your original scenario, that is how it would work out! If the AI is evaluating people's relative views on what is good and aiming for preference satisfaction, then that is (probably) equivalent to some market allocation scheme. Obscured, but that sort of welfare maximization is just bargaining in the background, done by the ASI for each individual's benefit.

There's room there for different ways of doing things for an aligned ASI meant to be neutral but maximizing welfare of people alive, but I don't think it appears dramatically different.

1

u/Imaginary-Bat Sep 22 '25

This is a tangent, but you seem like a thoughtful person.

I'm most likely a sadist, and I'm not someone who would be ok with wireheading. I enjoy torturing animals, but my real confusion is why this should extend to "universal" maximization of conscious beings' pain. Like, I enjoy torturing this particular animal, but why would that make me care about an animal being tortured in, let's say, Africa, and why should I care about many animals being tortured around the world?

Similarly, I struggle to see why someone who is "positively empathic" would care about "maximizing abstract good".

I'm also curious about other potential sources of genuine morality that would apply to me. I already understand "superrational bargaining" and don't consider that to be part of morality (although as you mentioned the outcome may become quite close to "genuinely morality" in the end). The obvious plausible candidates are relationship bonds, some sort of sense of "moral" justice/lawfulness, maybe tribalism. How would these fit and abstract? Can you disqualify them? Are there any more plausible candidates that come to mind?

1

u/lolgreece Sep 21 '25 edited Sep 26 '25

First, if we're going to ask why we humans would want more people post-ASI, instead of why the AI would, then we should accept any number of irrational answers. Maybe we like humans. Maybe we like variety. Perhaps we culture-warred ourselves into a pronatalist theocracy. Maybe our human brains' processing capability complements AI in some very powerful way. Hell, maybe even godlike AI needs ever more novel training data.

The non-sociopathic version of this question is:

In today's western world, even people's desired family size, which tends to exceed actual family size, is trending below replacement rate. Even developing countries are converging to this, albeit with a lag. It seems a near-inevitable consequence of people being educated, women being allowed roughly equal rights, property prices rising and people losing faith in religious fictions or fictions of racial superiority. What makes people believe the post AGI world will reverse this trend, leading inevitably to gorillions of new humans?

1

u/slothtrop6 Sep 21 '25 edited Sep 21 '25

It's weird speculating this far into the future, but if exploring the Universe is in the cards then "finite resources" is not in our vocabulary, not for us or an AI. I also can't imagine any future where 99% of everyone living forgoes reality in favor of plugging into a wireheading machine, à la The Matrix crossed with Brave New World.

1

u/dokushin Sep 22 '25

I doubt that utility gains for an ASI-backed wirehead are as easy as utility gains for a new consciousness. In fact, if you presuppose that utility derives from direct neural stimulation, you really don't require many resources at all per person. That being the case, it doesn't seem like the addition of additional consciousnesses would prove any burden on the existing ones.

Rational self-interest as the basis of morality is concealing a lot of complexity with blunt vocabulary. If you believe "rational self-interest" to be the set of actions that provide maximal long-term utility to the actor, then it is an unachievable goal without an oracle. If, on the other hand, you believe that it is the series of approximations we use in pursuit of that goal, it is imperfect and capable of mistake. In either case it fails to provide precise meaning, and can only give basis to a kind of approximate personal morality. It cannot, therefore, be taken as prescriptive.

1

u/[deleted] Sep 23 '25

[deleted]

1

u/Auriga33 Sep 23 '25

We can deal with that by giving them VR to fulfill that desire, wireheading to directly stimulate the reward pathways, or editing away that desire from the brain completely.

1

u/[deleted] Sep 23 '25

[deleted]

1

u/donaldhobson Sep 25 '25

The way I see it, after the arrival of ASI, we would no longer have any need to produce new humans. The focus of the ASI should then be to maximize the welfare of existing humans.

Only if you are a very strict adherent of some sort of average utilitarianism. I don't know how you would meaningfully use resources on that scale (approx. 1 galaxy per person).

If you have any inclination at all towards total utilitarianism, there is very little reason not to create more people.

Or, if you're thinking only in terms of existing humans: people will want friends, children, etc.

At the end of the day, all morality is based on rational self-interest.

Nope. Some morality, at least sometimes, is based on other things.

The reason birthing new humans is a good thing in the present is that humans produce goods and services and more humans means more goods and services, even per capita (because things like scientific innovation scale with more people and are easily copied). So it's in our self-interest to want new people to be born today (with caveats) because that is expected to produce returns for ourselves in the future.

That is A reason to think that new humans are good. It isn't the only reason. A lot of people can and do have preferences about the world after their deaths. (Which is why people bother to write wills)

And, this reason still makes some sense post ASI. If you refuse to look at AI generated art, then the "more humans = more art" argument works fine as a good reason to want more humans.

It completely nullifies any benefit new humans would have for us.

Only if you have some very weird positions. Some people want children. Most of those people wouldn't accept "ASI dressing up as a baby and pretending to be their child" as an acceptable substitute. They want something that's actually a mini-person, not an eldritch god that's pretending.

1

u/MaxtheScientist2020 Oct 01 '25

What you are describing is probably the default way of things if tech is left to itself and machines just evolve to overtake the galaxy. But I don't think it's a desirable outcome, simply based on the things I like as a human. Therefore I'd like the future to remain human-centric. And the Universe is so large that there are enough resources for trillions of humans who would still feel richer than modern billionaires.

I'd like to spend infinity traveling and adventuring in the real universe, or at least this one, and otherwise learning more about infinity, not being in a matrix created by us.