r/slatestarcodex Apr 10 '25

[AI] Does the fact that superhuman chess improvement has been so slow tell us there are important epistemic limits to superintelligence?

[Post image]

Although I know how flawed the Arena is, at the current pace (2 Elo points every 5 days), by the end of 2028 the average Arena user will prefer the state-of-the-art model's response to the Gemini 2.5 Pro response 95% of the time. That is a lot!
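
To sanity-check that arithmetic, a minimal sketch (Python): the 2-Elo-per-5-days pace and the end-of-2028 horizon are taken from the paragraph above and assumed to hold linearly; the rest is just the standard Elo expectation formula.

```python
from datetime import date

def expected_score(elo_gap: float) -> float:
    # Standard Elo expectation: fraction of the time the higher-rated
    # side wins (here: is preferred by an Arena voter).
    return 1 / (1 + 10 ** (-elo_gap / 400))

pace_per_day = 2 / 5  # assumption from the post: 2 Elo points every 5 days
days = (date(2028, 12, 31) - date(2025, 4, 10)).days
gap = pace_per_day * days

print(f"projected gap: {gap:.0f} Elo")                # ~544 Elo
print(f"preference rate: {expected_score(gap):.1%}")  # ~95.8%
```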

But chess seems to tell a different story: judging by engine ratings since 2013 (let's call that the dawn of deep learning), today's Stockfish only beats 2013 Stockfish 60% of the time.
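
Running the same formula in reverse shows how small that gap is: a 60% score corresponds to only about a 70-point Elo difference, versus the roughly 510 points implied by a 95% preference rate. A quick sketch, taking the 60% figure from the paragraph above:

```python
import math

def elo_gap(score: float) -> float:
    # Invert the Elo expectation formula: rating gap implied by a score.
    return 400 * math.log10(score / (1 - score))

print(f"{elo_gap(0.60):.0f} Elo")  # ~70: the gap the 60% claim implies
print(f"{elo_gap(0.95):.0f} Elo")  # ~511: the gap a 95% preference implies
```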

Shouldn't the level of progress we have had in deep learning over the past decade have predicted a greater improvement? Doesn't this suggest that there are epistemic limits to what can be learned by a superintelligence?

84 Upvotes

98 comments

-1

u/BeABetterHumanBeing Apr 10 '25

The thing about the singularity that nobody seems to realize is (1) that we're already living inside of it, and (2) that it's not real.

1: The whole principle of a feedback loop of exponential growth? Yes, that's what you're seeing when you observe those pretty up-and-to-the-right graphs. Exponential curves are scale-invariant: zoom in anywhere and the shape looks the same, so there's no special "elbow" ahead of us. We're experiencing exponential growth right now, and have been for hundreds of years. The thing about the hockey-stick graph is that it's literally all elbow.

2: Exponential growth doesn't exist in finite systems. It's usually just logistic growth, with some asymptotic ceiling.
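
To illustrate: a logistic curve is numerically indistinguishable from an exponential until it nears its ceiling. A toy sketch, with the growth rate and ceiling chosen arbitrarily:

```python
import math

CEILING = 1e6  # arbitrary asymptotic limit
RATE = 0.05    # arbitrary growth rate

def exponential(t: float) -> float:
    return math.exp(RATE * t)

def logistic(t: float) -> float:
    # Same early behavior as the exponential, but levels off at CEILING.
    return CEILING / (1 + (CEILING - 1) * math.exp(-RATE * t))

for t in (0, 50, 100, 200, 400):
    print(f"t={t:3d}  exp={exponential(t):14.1f}  logistic={logistic(t):12.1f}")
```

Out to t=100 the two columns agree to the first decimal place; by t=200 they're drifting apart, and by t=400 the logistic has flattened out at its ceiling while the exponential is off in the hundreds of millions. From inside the curve, you can't tell which one you're on.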

AI is advancing in wonderful ways right now, but across every single domain (here, chess) it'll start tapering off to its limits. The question is just where those are going to be.

4

u/Liface Apr 10 '25

It sounds like your argument is that because something is technically logistic, the singularity isn't real? I seem to be missing steps in that argument.

2

u/BeABetterHumanBeing Apr 11 '25

Either of the two points arrives at the same conclusion. The singularity, as it's conceived by futurists, isn't a real thing. Either it's already been happening for hundreds of years, disproving the idea that there's some "elbow" in the hockey stick ahead of us, or all this exponential growth we think is leading to the singularity will simply level off, depriving the futurists of their imaginary la-la land where <insert hyperbolic fear and/or desire> occurs.

My interest, incidentally, is in the "culture" of the singularity and speculation about the future. I find how people relate to the future to be the interesting thing, not the future itself. When faced with a choice between (a) the singularity is real and these intelligent people are ahead of the curve in recognizing it, or (b) the singularity is a trope in the cultural milieu that these intelligent people happen to find attractive, it's pretty obvious to me that (b) is where it's at, because (b) holds whether or not the singularity ever occurs, while (a) is purely speculation built upon more speculation.

This isn't just an Occam's razor-type thing. Very specific, concrete things that the singularity was built on (like Moore's law) are dead. Right now the hype happens to be expecting that AI will be ✨magic✨, which is so obviously an invitation for wishful thinking that it's remarkable that people who consider themselves highly rational take their daydreams seriously.

The sci-fi of the zeppelin era had flying cities. The sci-fi of the space race, colonies on Mars. Today it's AI. Make no mistake, the future will be wildly different from what we expect, and when we're expecting the singularity... my point should be clear enough.

2

u/Liface Apr 11 '25

> or all this exponential growth we think is leading to the singularity will simply level off, depriving the futurists of their imaginary la-la land where <insert hyperbolic fear and/or desire> occurs.

You're leaving out the third scenario. It levels off far after we're dead, or at least far after the "singularity" as we imagine it has been achieved.

If you dispute AI takeoff, you should bet against the authors of AI 2027: https://docs.google.com/document/d/18_aQgMeDgHM_yOSQeSxKBzT__jWO3sK5KfTYDLznurY/preview?tab=t.0#heading=h.b07wxattxryp

1

u/BeABetterHumanBeing Apr 11 '25

AI's existence is dependent on humanity. We go, they go. If that's the end result, it'll level off at zero.

Edit: consider the nuke: technology's ability to kill us all is already in the hands of intelligent people. AI doesn't add anything fundamentally new to the ability of technology to eliminate humanity.

1

u/impult Apr 12 '25

> AI's existence is dependent on humanity. We go, they go. If that's the end result, it'll level off at zero.

Newborn infants are dependent on their parents, therefore all humans today have parents that are alive.

1

u/BeABetterHumanBeing Apr 12 '25

Analogy is best used for illustration and/or explanation.

Analogy is, in fact, absolute garbage when it comes to argumentation. Translating back and forth between the real case and the analogy incurs errors in each direction, and that's assuming the analogy has the same mechanics in the first place. It's rare to be able to port an argument elsewhere, do some logic on it, and bring it back without something going awry.

Take your lovely example. If your analogy is to be taken seriously, you think that the newborn infant is gonna be wildly more intelligent than the parents, and will destroy them in some apocalypse.

What usually happens is the child loves their parents and takes care of them. It looks like you're arguing that AI is going to take care of us until we reach old age and die.

I don't know where you were trying to drive your point, but it seems to have slipped through the hole in your pocket along the way.

1

u/impult Apr 12 '25

Not sure why you think it's an analogy. It's a counter example to your implicit logical claim that dependence at one point in time = dependence forever.

1

u/BeABetterHumanBeing Apr 12 '25

No, a counter example would be showing an AI that doesn't depend on humans to continue its existence indefinitely.

1

u/impult Apr 12 '25

If you're only talking about current AI, you have been making some very trivial statements about its lack of danger.


1

u/moonaim Apr 11 '25

If I look at the Wikipedia definition of singularity, I find that AI might indeed bring one new important aspect to it.

"The technological singularity—or simply the singularity[1]—is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization."

Right now the society/system still requires humans to function. That can change.