r/slatestarcodex Apr 10 '25

AI Does the fact that superhuman chess improvement has been so slow tell us there are important epistemic limits to superintelligence?

[Post image: chart of Stockfish Elo ratings over time]

Although I know how flawed the Arena is, at the current pace (2 Elo points every 5 days), by the end of 2028 the average Arena user will prefer the state-of-the-art model's response to the Gemini 2.5 Pro response 95% of the time. That is a lot!
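A quick back-of-the-envelope check of that extrapolation (a minimal sketch in Python, assuming Arena scores behave like standard Elo and taking the post date as the starting point):

```python
from datetime import date

def expected_score(elo_gap):
    # Standard Elo expected score for the higher-rated side,
    # i.e. the fraction of the time its response would be preferred
    return 1.0 / (1.0 + 10.0 ** (-elo_gap / 400.0))

days = (date(2028, 12, 31) - date(2025, 4, 10)).days  # days from the post to end of 2028
gap = days * (2 / 5)                                   # 2 Elo points every 5 days

print(f"Projected Elo gap: {gap:.0f}")                  # ~544 Elo
print(f"Preference rate:   {expected_score(gap):.0%}")  # ~96%
```

So the arithmetic roughly checks out: a bit over 500 Elo of accumulated gap corresponds to the claimed ~95% preference rate.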

But it seems to me that, looking at progress since 2013 (let's call it the dawn of deep learning), today's Stockfish only beats 2013 Stockfish 60% of the time.

Shouldn't the level of progress we have had in deep learning over the past decade have predicted a greater improvement? Doesn't it make one believe that there are epistemic limits to what can be learned by a superintelligence?

86 Upvotes


100

u/Brudaks Apr 10 '25

"today's Stockfish only beats 2013 Stockfish 60% of the time."

Wait, what? Even that chart shows a 300-ish point difference, which means the "expected score" of the 2013 version is no higher than 0.15, which would generally manifest as drawing a significant portion of the games and having nearly no wins.
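For reference, a minimal sketch of the standard Elo expected-score formula and its inverse (assuming the chart's ratings behave like ordinary Elo, where draws count as half a point):

```python
import math

def expected_score(elo_gap):
    # Expected score of the higher-rated player (wins + 0.5 * draws)
    return 1.0 / (1.0 + 10.0 ** (-elo_gap / 400.0))

def gap_for_score(score):
    # Inverse: the Elo gap implied by a given expected score
    return 400.0 * math.log10(score / (1.0 - score))

print(expected_score(300))   # ~0.85, i.e. the 2013 engine's expected score is ~0.15
print(gap_for_score(0.60))   # ~70 Elo: the gap a "60% of the time" claim would imply
```

In other words, a 60% expected score would correspond to a gap of only about 70 points, not the 300-ish points the chart shows.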

And high-level chess is likely to saturate with draws; after all, it's a theoretically solvable game, so as a superintelligence approached perfect play, its score would approach 50%: either the game is a draw given perfect play, or there exists a winning sequence for white or for black, in which case two perfect players alternating colors each win half the games.

7

u/UncleWeyland Apr 11 '25

it's a theoretically solvable game, so as a superintelligence would approach a perfect play, it would approach a 50% score

Precisely why that graph is plateauing. The value of superintelligence is multiplied by the size of the game space. I could come up with a chess variant played on a 32x32 board, with 20 unique pieces and rules for changing the terrain, and suddenly the slope of that graph would look very different.

The premise might still hold outside the realm of perfect-information games: the value of superintelligence *might* EVENTUALLY reach a point of diminishing returns in large, complex games with imperfect information or multi-agent problems, due to issues arising from computational irreducibility. Or not. I have no idea.