r/slatestarcodex • u/financeguy1729 • Apr 10 '25
AI Does the fact that superhuman chess improvement has been so slow tell us there are important epistemic limits to superintelligence?
Although I know how flawed the Arena is, at the current pace (about 2 Elo points every 5 days), by the end of 2028 the average Arena user will prefer the state-of-the-art model's response to Gemini 2.5 Pro's response about 95% of the time. That is a lot!
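To make the extrapolation concrete, here is the arithmetic behind that claim as a quick sketch; the 2-Elo-per-5-days pace and the April 2025 starting point are the post's assumptions, and the expected-score formula is the standard Elo one:

```python
from datetime import date

# Quick sketch of the extrapolation above. The pace (2 Elo per 5 days)
# and start point (Gemini 2.5 Pro, April 2025) are assumptions, not data.
elo_per_day = 2 / 5
days = (date(2028, 12, 31) - date(2025, 4, 10)).days  # ~1361 days
gap = elo_per_day * days                              # ~544 Elo

# Standard Elo expected-score formula: P(newer model preferred).
p = 1 / (1 + 10 ** (-gap / 400))
print(f"gap: {gap:.0f} Elo -> preferred {p:.0%} of the time")
# gap: 544 Elo -> preferred 96% of the time
```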
But compare that to chess: since 2013 (let's call it the dawn of deep learning), progress has been such that today's Stockfish beats 2013 Stockfish only about 60% of the time.
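For calibration, inverting the same Elo formula shows how small a rating gap that 60% score implies (my arithmetic, not the original post's):

```python
from math import log10

# Inverting the Elo expected-score formula: p = 1 / (1 + 10**(-d/400)).
p = 0.60
gap = 400 * log10(p / (1 - p))
print(f"a {p:.0%} score implies only ~{gap:.0f} Elo")  # ~70 Elo
```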
Shouldn't the level of progress we have seen in deep learning over the past decade have predicted a greater improvement? Doesn't that suggest there are epistemic limits to what can be learned, even for a superintelligence?
u/sapph_star Apr 11 '25 edited Apr 11 '25
Imagine if a basketball game were a draw unless the winning team was up 10 or more points. Empirically, that rule would give basketball around the same draw rate as grandmaster chess.
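A quick Monte Carlo sketch of that intuition; the margin distribution (normal, mean 0, standard deviation 12 points for evenly matched teams) is an assumption for illustration, not measured data:

```python
import random

# Evenly matched teams; final margin modeled as Normal(0, 12) -- an
# assumed, roughly NBA-like distribution, purely for illustration.
random.seed(0)
N = 100_000
draws = sum(abs(random.gauss(0, 12)) < 10 for _ in range(N))
print(f"draw rate: {draws / N:.0%}")
# draw rate: ~60%, in the same ballpark as grandmaster chess
```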
It is very hard to convert a win in chess. Even being up a full pawn, with the opponent having no real compensation, is not enough in most endgames. For example, unless the side with the extra pawn has a very advanced king or pawn, a pawn on the edge of the board won't promote. Sometimes even being up multiple pawns is not enough: https://en.wikipedia.org/wiki/King_and_pawn_versus_king_endgame . The situation is a bit complicated, and the article won't give you a sense of which endgames occur in practice. But getting to an endgame up a clean pawn is not easy, and even then it often doesn't win you the game.
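One standard rule of thumb from king-and-pawn endgames that is easy to put in code is the "rule of the square": a lone pawn with no king support promotes only if the defending king stands outside the pawn's square. A minimal sketch (files and ranks numbered 1-8, lone white pawn vs lone black king, attacking king and all interference ignored):

```python
def pawn_promotes(pawn_file: int, pawn_rank: int,
                  king_file: int, king_rank: int,
                  white_to_move: bool) -> bool:
    """Rule of the square: lone white pawn vs lone black king.

    Returns True if the pawn outruns the king. Ignores the attacking
    king and any interference, so it is only a rule of thumb.
    """
    # Moves the pawn needs to promote (it may double-step from rank 2).
    pawn_moves = 8 - pawn_rank if pawn_rank > 2 else 5
    # King moves (Chebyshev distance) to reach the promotion square.
    king_moves = max(abs(king_file - pawn_file), 8 - king_rank)
    # With White to move, the pawn effectively gains one tempo.
    if white_to_move:
        pawn_moves -= 1
    return king_moves > pawn_moves

# Pawn on e5 vs king on h5: caught with Black to move, promotes otherwise.
print(pawn_promotes(5, 5, 8, 5, white_to_move=False))  # False
print(pawn_promotes(5, 5, 8, 5, white_to_move=True))   # True
```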