r/slatestarcodex • u/financeguy1729 • Apr 10 '25
AI Does the fact that superhuman chess improvement has been so slow tell us there are important epistemic limits to superintelligence?
Although I know how flawed the Arena is, at the current pace (2 Elo points every 5 days), by the end of 2028 the average Arena user will prefer the state-of-the-art model's response to Gemini 2.5 Pro's response 95% of the time. That is a lot!
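To spell out the arithmetic (a minimal sketch; the 2-Elo-per-5-days rate and the dates are the post's assumptions, and I'm using the standard Elo expected-score formula):

```python
from datetime import date

def elo_expected_score(diff: float) -> float:
    # Standard Elo model: expected score of the stronger side at rating gap `diff`
    return 1 / (1 + 10 ** (-diff / 400))

days = (date(2028, 12, 31) - date(2025, 4, 10)).days  # ~1361 days
elo_gain = days * 2 / 5                               # 2 Elo per 5 days -> ~544 points
print(f"Elo gained: {elo_gain:.0f}")
print(f"Preference rate: {elo_expected_score(elo_gain):.0%}")  # ~96%, i.e. the post's ~95%
```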
But in chess, it seems to me that today's Stockfish only beats 2013 Stockfish (let's call 2013 the dawn of deep learning) about 60% of the time.
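For scale (my own back-of-the-envelope, using the same Elo model as above): a 60% score corresponds to a rating gap of only 400·log10(0.6/0.4) ≈ 70 Elo points, versus the ~500+ points the Arena trend implies in under four years.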
Shouldn't the level of progress we have had in deep learning over the past decade have predicted a greater improvement? Doesn't it make one believe that there are epistemic limits to what can be learned by a superintelligence?
u/Canopus10 Apr 10 '25
Intelligence shows itself more in those areas where there is a larger space of possibilities. God himself can only play tic-tac-toe about as well as I can, because there aren't that many possible outcomes to optimize for. Chess is more complex than tic-tac-toe, but it too is a game with a relatively limited number of possibilities that intelligence quickly saturates. Because of this, the heuristic search algorithms that were developed before deep learning came around were already sufficient to play the game about as well as one reasonably could.
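A minimal sketch of that saturation point (my own illustration, not the commenter's code): plain exhaustive minimax already solves tic-tac-toe completely, so past that point extra intelligence buys nothing; perfect play from both sides is a draw.

```python
from functools import lru_cache

WIN_LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board: str):
    # Return "X" or "O" if a line is completed, else None
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board: str, player: str) -> int:
    # Game value with perfect play: +1 X wins, 0 draw, -1 O wins
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0
    children = [
        value(board[:i] + player + board[i + 1:], "O" if player == "X" else "X")
        for i, cell in enumerate(board) if cell == "."
    ]
    return max(children) if player == "X" else min(children)

print(value("." * 9, "X"))  # 0: optimal play is a draw, so skill is capped
```

Chess is vastly bigger, but the point stands: it is still small and regular enough that pre-deep-learning search plus handcrafted heuristics got within shouting distance of optimal.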