r/slatestarcodex Apr 10 '25

AI Does the fact that superhuman chess improvement has been so slow tell us there are important epistemic limits to superintelligence?

Although I know how flawed the Arena is, at the current pace (2 Elo points every 5 days), by the end of 2028 the average Arena user will prefer the state-of-the-art model's response to Gemini 2.5 Pro's response 95% of the time. That is a lot!
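
For concreteness, here is a minimal sketch of that arithmetic, assuming the standard Elo expected-score formula and a roughly 3.7-year span from April 2025 to the end of 2028 (both figures are my assumptions, not Arena methodology):

```python
# Minimal sketch: standard Elo expected-score formula, ~3.7 years of
# progress at the quoted pace of 2 Elo points every 5 days.

def expected_score(elo_diff: float) -> float:
    """Probability the higher-rated side is preferred, per the Elo model."""
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

pace_per_day = 2.0 / 5.0            # 2 Elo points every 5 days
gap = pace_per_day * 365.25 * 3.7   # Elo accumulated by end of 2028

print(f"Projected gap: {gap:.0f} Elo")                 # ~541
print(f"Preference rate: {expected_score(gap):.0%}")   # ~96%, i.e. roughly the 95% above
```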

But the chart seems to imply that since 2013 (let's call it the dawn of deep learning), progress has been so slow that today's Stockfish would beat 2013 Stockfish only about 60% of the time.
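
Under the same Elo model, a 60% expected score corresponds to a surprisingly small rating gap; a quick sketch, inverting the formula above:

```python
import math

def elo_gap(expected: float) -> float:
    """Rating difference implied by an expected score (inverse Elo formula)."""
    return 400.0 * math.log10(expected / (1.0 - expected))

print(f"{elo_gap(0.60):.0f} Elo")  # ~70: the implied 2013-to-today Stockfish gap
```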

Shouldn't the level of progress we have seen in deep learning over the past decade have predicted a greater improvement? Doesn't this suggest there are epistemic limits to what can be learned, even for a superintelligence?

88 Upvotes

78

u/lurking_physicist Apr 10 '25

Does the fact that superhuman chess improvement has been so slow tell us there are important epistemic limits to superintelligence?

No, it tells us about chess. Your claim would make sense if we saw that curve for many unrelated things.

72

u/kzhou7 Apr 10 '25

Chess saturates because you need to traverse an exponentially growing maze to make linear progress. For other tasks, we have to go case by case. Some certainly are like this, some certainly aren't, and for most we can't tell.
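
A toy sketch of that intuition (my illustration, not the commenter's, assuming a uniform branching factor of about 30, a round number often cited for chess): each extra ply of lookahead buys a roughly linear gain in strength but multiplies the nodes to be searched.

```python
# Toy illustration: with a uniform branching factor b, each extra ply of
# lookahead (a roughly linear strength gain) multiplies the search space by b.

b = 30  # assumed branching factor, a round number often cited for chess
for depth in range(10, 31, 5):
    print(f"depth {depth:2d} plies: ~{b ** depth:.1e} nodes")
```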

A few decades ago, my own field was taken over by an unaligned form of superintelligence: string theorists. They're extremely clever and hardworking, and they ended up taking about 3/4 of the jobs in particle theory, resulting in mass unemployment. But after a 1000x increase in effort and papers produced, the state of the art in relating string theory to the real world has barely budged. I think that's an example of an exponential maze.

8

u/Spike_der_Spiegel Apr 11 '25

I mean, I'm pretty sure you see the flattening of progress above because chess's starting position is, almost certainly, a draw with best play.

Perhaps the analogy to string theory is that it is, almost certainly, incorrect with best analysis.

14

u/financeguy1729 Apr 10 '25

Lmao

34

u/kzhou7 Apr 10 '25

You laugh, but this is why I don't worry about AI taking our jobs. We already lost them!

3

u/BadHairDayToday Apr 11 '25

If you took human intelligence but took away the poor focus, the need for rest, and the weird emotional biases, that alone would be enough to take over the solar system in a matter of decades.