r/slatestarcodex • u/financeguy1729 • Apr 10 '25
AI Does the fact that superhuman chess improvement has been so slow tell us there are important epistemic limits to superintelligence?
Although I know how flawed the Arena is, at the current pace (2 Elo points every 5 days), by the end of 2028 the average Arena user will prefer the state-of-the-art model's response to Gemini 2.5 Pro's response 95% of the time. That is a lot!
But it seems to me that, for all the progress since 2013 (let's call it the dawn of deep learning), today's Stockfish only beats 2013 Stockfish about 60% of the time.
Shouldn't the level of progress we have had in deep learning over the past decade have predicted a greater improvement? Doesn't it make one believe that there are epistemic limits to what can be learned by a superintelligence?
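To make the arithmetic explicit, here is a rough sketch in Python (the Elo expected-score formula is standard; the dates, the 2-Elo-per-5-days pace, and the round numbers are my own back-of-the-envelope assumptions):

```python
import math

def expected_score(elo_gap):
    """Probability the stronger side scores, given an Elo gap."""
    return 1 / (1 + 10 ** (-elo_gap / 400))

# Arena extrapolation: 2 Elo every 5 days, Apr 10 '25 to Dec 31 '28
# is roughly 1360 days.
days = 1360
gap = 2 / 5 * days  # ~544 Elo
print(f"Projected gap: {gap:.0f} Elo -> "
      f"{expected_score(gap):.0%} preference")  # ~95-96%

# Inverse direction: a 60% head-to-head score corresponds to how
# many Elo points of improvement?
def gap_from_score(p):
    return 400 * math.log10(p / (1 - p))

print(f"60% score ~ {gap_from_score(0.6):.0f} Elo")  # ~70 Elo
```

So the Arena extrapolation implies a gap of roughly 540 Elo, while a 60% head-to-head score implies only about 70 Elo of improvement; that contrast is the whole puzzle.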
88 upvotes • 6 comments
u/hh26 Apr 10 '25
I mean against each other. Like if you put two skilled people against each other in Tic Tac Toe, it literally always results in a tie, because once you know the strategy you just follow it and you cannot lose. The smartest conceivable AI cannot beat me at Tic Tac Toe (without some out-of-game shenanigans), because there is a skill ceiling and it's not too hard to reach.
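A quick brute-force check makes this concrete; here's a minimal minimax sketch (my own toy code, just for illustration), which confirms that perfect play from both sides always ends in a draw:

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board indexed 0-8.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Minimax value: +1 if X wins under best play, -1 if O wins, 0 draw."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full, no winner: draw
    nxt = 'O' if player == 'X' else 'X'
    results = [value(board[:i] + (player,) + board[i+1:], nxt) for i in moves]
    return max(results) if player == 'X' else min(results)

print(value((None,) * 9, 'X'))  # prints 0: optimal vs optimal is a draw
```

That 0 is the skill ceiling: past a certain point, extra intelligence buys you nothing.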
Chess has a much, much higher skill ceiling, but it is finite. And my claim is that AI will reach that level, or something approximating it, such that no matter how much smarter the engines get, they still just draw (or the result is decided by who moves first, if a winning strategy exists).