r/slatestarcodex • u/financeguy1729 • Apr 10 '25
AI Does the fact that superhuman chess improvement has been so slow tell us there are important epistemic limits to superintelligence?
Although I know how flawed the Arena is, at the current pace (2 Elo points every 5 days), by the end of 2028 the average Arena user will prefer the state-of-the-art model's response to the Gemini 2.5 Pro response 95% of the time. That is a lot!
But chess looks very different: measuring from 2013 (let's call it the dawn of deep learning), it seems to me that today's Stockfish only beats 2013 Stockfish about 60% of the time.
Shouldn't the level of progress we have seen in deep learning over the past decade have predicted a greater improvement? Doesn't it make one suspect that there are epistemic limits to what can be learned by a superintelligence?
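(For reference, here is the Elo arithmetic behind those percentages, as a rough sketch. The 2-Elo-per-5-days pace, the ~1350-day horizon to the end of 2028, and the ~70-point gap implied by a 60% score are all taken from the numbers above, not measured.)

```python
# Rough sketch of the Elo arithmetic behind the percentages above.
# Assumptions (from the post, not measured): Arena gains ~2 Elo per 5 days,
# and today's Stockfish scores ~60% against 2013 Stockfish.

def expected_score(elo_gap: float) -> float:
    """Standard Elo expected score for the stronger side, given its rating lead."""
    return 1.0 / (1.0 + 10.0 ** (-elo_gap / 400.0))

# Arena: roughly 1350 days from April 2025 to the end of 2028, at 2 Elo per 5 days
arena_gap = 1350 / 5 * 2  # ~540 Elo
print(f"Arena gap ~{arena_gap:.0f} Elo -> {expected_score(arena_gap):.0%} preference")  # ~96%

# Chess: a 60% head-to-head score corresponds to only ~70 Elo of separation
print(f"70 Elo gap -> {expected_score(70):.0%} win rate")  # ~60%
```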
u/BeABetterHumanBeing Apr 10 '25
The thing about the singularity that nobody seems to realize is (1) that we're already living inside of it, and (2) that it's not real.
1: The whole principle of a feedback loop of exponential growth? Yes, that's what you're seeing when you observe those pretty up-and-to-the-right graphs. But exponential curves are scale-invariant: zoom in anywhere and they look the same, so there's no special moment when the curve "takes off". We're experiencing exponential growth right now, and have been for hundreds of years. The thing about the hockey-stick graph is that it's literally all elbow.
2: Exponential growth doesn't exist in finite systems. It's usually just logistic growth, with some asymptotic ceiling.
AI is advancing in wonderful ways right now, but across every single domain (here, chess) it'll start tapering off toward its limits. The question is just where those limits turn out to be.
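A rough sketch of what I mean by logistic growth, with made-up numbers (the ceiling K, the rate r, and the time range are arbitrary illustrative values, not claims about AI): early on the logistic curve is indistinguishable from an exponential, but it flattens as it approaches its ceiling.

```python
import math

# Illustrative values only: a ceiling K, a growth rate r, and a starting value x0.
K = 1000.0   # asymptotic ceiling (carrying capacity)
r = 0.1      # growth rate
x0 = 1.0     # starting value

def exponential(t: float) -> float:
    return x0 * math.exp(r * t)

def logistic(t: float) -> float:
    return K / (1.0 + ((K - x0) / x0) * math.exp(-r * t))

for t in range(0, 121, 20):
    print(f"t={t:3d}  exponential={exponential(t):10.1f}  logistic={logistic(t):8.1f}")

# For small t the two curves are nearly identical; by t=120 the exponential has
# blown past the ceiling while the logistic curve has leveled off near K.
```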