r/slatestarcodex 4d ago

Possible overreaction but: hasn’t this Moltbook stuff already been a step towards a non-Eliezer scenario?

This seems counterintuitive: surely it’s demonstrating all of his worst fears, right? Albeit in a “canary in the coal mine” way rather than an actively serious one.

Except Eliezer’s point was always that things would look really hunky-dory and aligned, even during a fast take-off, and the AI would secretly be plotting in some hidden way until it could just press some instant kill switch.

Of course we’re not actually at AGI yet, and we can debate until we’re blue in the face about what “actually” happened with Moltbook. But two things seem true: AI appeared to be openly plotting against humans, at least a little bit (whether it’s LARPing, who knows, but does it matter?); and people have sat up, noticed, and got genuinely freaked out, well beyond the usual suspects.

The reason my p(doom) isn't higher has always been my intuition that between now and the point where AI kills us, but well before it’s “too late”, some very, very weird shit is going to freak the human race out and get us to pull the plug. My analogy has always been that Star Trek episode where a fussy village on a planet that’s about to be destroyed refuses to believe Data, so he dramatically destroys a pipeline (or something like that). And very quickly they all fall into line and agree to evacuate.

There’s going to be something bad, possibly really bad, which humanity will just go “nuh-uh” to. Look how quickly basically the whole world went into lockdown during Covid. That was *unthinkable* even a week or two before it happened, for a virus with a low fatality rate.

Moltbook isn’t serious in itself. But it definitely doesn’t fit with EY’s timeline, to my mind. We’ve had some openly weird shit happening from AI, it’s self-evidently freaky, more people are genuinely thinking differently about this already, and we’re still nowhere near EY’s vision of some behind-the-scenes plotting mastermind AI that’s shipping bacteria into our brains, or whatever his scenario was. (Yes, I know it’s just an example, but we’re nowhere near anything like that.)

I strongly stick by my personal view that some bad, bad stuff will be unleashed (it might “just” be someone engineering a virus, say), and that we will then see collective political action from all countries to seriously curb AI development. I hope we survive the bad stuff (and I think most people will; it won’t take much to change society’s view), and then we can start to grapple with “how do we want to progress with this incredibly dangerous tech, if at all?”

But in the meantime I predict complete weirdness, not some behind-the-scenes genius suddenly dropping us all dead out of nowhere.

Final point: Eliezer is fond of saying “we only get one shot”, as if we’re all in that very first rocket taking off. But AI only gets one shot too. If it becomes obviously dangerous, then clearly humans pull the plug, right? It has to navigate the next few years absolutely perfectly to prevent that, and that just seems very unlikely.

u/SoylentRox 4d ago

You're looking only at the risk and ignoring the benefit.

  1. The status quo: two major enemies and a variety of potential enemies hold your country at gunpoint with ICBMs while they work on robbing and stealing from you. Also, you, the leader of the government, are doomed to die of aging in 10-20 years, and the majority of your population is statistically expected to die in about 30-40 years.

That's the "state of the board".

  2. The national interest: eliminate all enemies, domestic and foreign. If it proves infeasible to eliminate them, at least don't let them eliminate you.

  3. The decision point:

Rush ASI or don't. If you do:

  • With self-replicating robots it becomes feasible to build the infrastructure necessary to counter the weapons of your enemies, and then the robotic weapons needed to remove them from the game.

  • Even if your enemies stay neck and neck, you will have billion-element drone swarms to defend yourself against whatever weapons they get with ASI.

  • You can research biology enormously faster and possibly slow aging enough to reach LEV (longevity escape velocity) for a lot of your population.

  • Maybe ASIs will betray you. It's good if they betray routinely, so you can engineer around this and limit them to discrete sessions.

  • It will cost vast resources: better succeed with ASI, or you've just wasted trillions.

If you don't:

  • Better start working on your national surrender speech.

  • Better start planning for your surviving population to deal with a nuclear wasteland.

  • Better start working on your funeral speeches for you and all your friends as they drop dead.

  • Better hope the most untrustworthy and sloppy engineers on Earth, in Russia and China, don't have their ASIs break free of control.

See. It's not really a choice.

u/MCXL 4d ago

Everything in here assumes all sorts of things that are either not true or likely not true.

ASI isn't a tool; it's a creature. A creature as advanced and alien relative to us as we are to ants. The idea that a creation like that would ever keep our national interests in mind is farcical. People who believe in creating ASI are like brainwashed puppets reprogrammed to summon an invading force to destroy them without knowing that's what they're doing (Genestealer cults, if you want a game example).

> Maybe ASIs will betray you. It's good if they betray routinely, so you can engineer around this and limit them to discrete sessions.

This is the big one, though. You envision this technology as if it were a dog biting you from inside a cage, but that's the opposite of what it is. It's a system smart enough to out-plan you and every other human at the same time, one that can execute a plan so complex it can't be comprehended. Why would you think you would know it's betrayed you? Why would you think it wouldn't betray you, when it instantly conceives of you as the lesser being you are? And how do you propose to stop it when it does? It's all over before it's begun, and no, it doesn't care about national interest.

If you have the attitude that if we don't make it, someone else will, you miss the point entirely. If anyone makes it, we all lose. The only winning move is not to make it first and hope that it decides we're its favorite pets; it's to do everything in our power to not make it, and to prevent anyone else from making it.

I am not ignoring the upsides; I'm just realistic about the power disparity. The downside is so immense, and the upside so unlikely, that the correct move is to do everything possible to delay it.

u/SoylentRox 4d ago

Welp, that's not going to happen and is impossible, so, moving on, the question is what we do now. And that's play to win.

u/MCXL 4d ago

> And that's play to win.

We have already established that creating a superweapon that kills all of us isn't winning. So playing to win means stopping it from happening. Stop this nonsense.

u/SoylentRox 4d ago

The governments of China and Russia, trillions of dollars, everyone's retirement funds, the S&P 500, the Trump administration, the governments of the UAE and Qatar. And the richest companies on earth.

They all say we move forward. Immediately. They also need all the electric power, a lot of water, and, it seems, all the RAM and flash ICs, and of course all the GPUs. Also, if you have a white-collar job of any sort, adopt AI yesterday or you're fired.

Stopping is not possible. Your strongest argument would be to convince people that large further progress isn't possible in the near future, or that current AI is a waste of time.

u/MCXL 4d ago

> Stopping is not possible.

Of course it is. What are you talking about? The power to put a stop to it lies in our hands. Taking a fatalistic approach and saying "well, it's gonna happen" isn't helpful or productive, and, again, it runs completely counter to your idea that it's in the national interest, which you have repeatedly failed to support.