Hacker News

This is in fact not how a chess engine works. It has an evaluation function that assigns a position a numerical value (score) based on a number of factors (material advantage, king "safety", pawn structure, etc.).

These heuristics are certainly "good enough" that Stockfish can beat the strongest humans, but it is rarely possible for the evaluation itself to determine whether a position leads to mate.
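An evaluation function of that kind can be sketched in a few lines. A toy illustration only: the piece values and term weights below are made up, and real engines tune hundreds of such terms.

```python
# Toy sketch of a classical hand-written evaluation function.
# Piece values (centipawns) and term weights are illustrative only.
PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900, "K": 0}

def material(pieces):
    """pieces: list of (piece_letter, color) tuples, color 'w' or 'b'."""
    return sum(PIECE_VALUES[p] * (1 if c == "w" else -1) for p, c in pieces)

def evaluate(pieces, king_safety=(0, 0), pawn_structure=(0, 0)):
    """Score from White's point of view: material plus weighted positional
    terms (passed here as precomputed (white, black) feature pairs)."""
    score = material(pieces)
    score += 10 * (king_safety[0] - king_safety[1])       # illustrative weight
    score += 5 * (pawn_structure[0] - pawn_structure[1])  # illustrative weight
    return score

# White up a knight, positional features equal:
print(evaluate([("K", "w"), ("K", "b"), ("N", "w")]))  # → 320
```

The search then maximizes this score a few plies deep rather than trusting it on the current position alone.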

I guess the question is whether we can write a good enough objective function that would encapsulate all the relevant attributes of "good code".




An automated objective function is indeed core to how AlphaGo, AlphaZero, and other RL + deep learning approaches work, though there it is much more complex and is integrated into a larger system.

The core of these approaches is "self-play", which is where the "superhuman" qualities arise. The system plays billions of games against itself and uses the data from those games to refine itself further. An automated "referee" (objective function) seems to be an inescapable requirement for unsupervised self-play.
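The shape of that loop fits in a few lines. Everything below is a made-up stand-in: the game is "race to 10" (players alternately add 1 or 2; whoever reaches 10 wins), and a simple weight table plays the role that a deep network plays in AlphaZero. The referee is the automated objective function.

```python
import random

def referee(total):
    """Automated objective function: the game is over once total >= 10,
    and the player who made the last move won."""
    return total >= 10

prefs = {}  # (state, action) -> weight; stand-in for the learned policy

def choose(total):
    weights = [prefs.get((total, a), 1.0) for a in (1, 2)]
    return random.choices((1, 2), weights=weights)[0]

def self_play_game():
    total, moves, player = 0, [[], []], 0
    while not referee(total):
        a = choose(total)
        moves[player].append((total, a))
        total += a
        player ^= 1
    winner = player ^ 1  # the player who just moved reached 10 and won
    return winner, moves

random.seed(0)
for _ in range(5000):
    winner, moves = self_play_game()
    for (s, a) in moves[winner]:  # reinforce the winner's choices
        prefs[(s, a)] = prefs.get((s, a), 1.0) + 0.1

# After self-play, the policy strongly prefers the winning move 2 at total 8:
print(prefs.get((8, 2), 1.0) > prefs.get((8, 1), 1.0))  # → True
```

No human games or labels enter the loop; the referee alone supplies the training signal, which is the point being made above.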

I would suggest that Stockfish and other older chess engines are not a good analogy for this discussion. Worth noting, though, that even Stockfish no longer uses a hand-written objective function over extracted features like you describe. It instead uses a highly optimized neural network (NNUE) trained on millions of evaluated positions.
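The conceptual shift is from hand-weighted terms to a learned mapping from features to score. A toy illustration only: the weights below are invented, and real NNUE networks have millions of parameters plus incrementally updated first-layer accumulators.

```python
# Toy one-hidden-layer network standing in for a learned evaluation.
def relu(x):
    return max(0.0, x)

def net_eval(features, w1, b1, w2, b2):
    """score = w2 · relu(W1 · features + b1) + b2"""
    hidden = [relu(sum(w * f for w, f in zip(row, features)) + b)
              for row, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

# Invented 3-feature input: (material diff, king safety diff, pawn structure diff)
w1 = [[1.0, 0.2, 0.1], [-0.5, 1.0, 0.0]]
b1 = [0.0, 0.0]
w2 = [1.0, 0.5]
b2 = 0.0
print(net_eval([1.0, 0.0, 0.0], w1, b1, w2, b2))  # → 1.0
```

The weights are fitted to position/score training data rather than tuned by hand, which is what makes the "objective function" itself a learned artifact.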


Maybe I am misunderstanding what you are saying, but Stockfish, for example, given time and threads, seems very good at finding forced checkmates 20 or more moves deep.
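That ability comes from exhaustive search rather than the evaluation heuristic: a forced win is proved by checking every reply. The principle fits in a few lines on a toy game (illustrative only: "race to 10", where players alternately add 1 or 2 and whoever reaches 10 wins); a chess engine applies the same idea to a vastly larger tree.

```python
def forced_win(total):
    """True if the player to move from `total` can force a win."""
    if total >= 10:
        return False  # previous player already reached 10; game over
    # Winning iff some move leads to a position the opponent cannot win from.
    return any(not forced_win(total + a) for a in (1, 2))

# Totals from which the player to move has a forced win:
print([t for t in range(10) if forced_win(t)])  # → [0, 2, 3, 5, 6, 8, 9]
```

When this search reaches a proven win, the result is exact in a way no static evaluation can be, which is how engines can announce "mate in N".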




