Subject: Always sac the exchange Sun Apr 02, 2023 11:11 pm
I watched some videos of Benjamin Finegold, and he is always joking about sacrificing the exchange. Only in the endgame are rooks much stronger than the minor pieces. So I thought: maybe give a rook only four points in the opening and add extra points the nearer you get to the endgame.
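A minimal Python sketch of the phase-tapered rook value Henk is suggesting. The 0-256 phase scale and the 400/550 centipawn endpoints are illustrative assumptions only, not values taken from the thread.

Code:
# Sketch of a phase-tapered rook value (illustrative numbers only).
# phase runs from 256 (all pieces still on the board) down to 0 (bare endgame).

ROOK_MG = 400   # assumed opening/middlegame value, in centipawns
ROOK_EG = 550   # assumed endgame value, in centipawns

def tapered_rook_value(phase: int) -> int:
    """Interpolate the rook's value between middlegame and endgame."""
    phase = max(0, min(256, phase))
    return (ROOK_MG * phase + ROOK_EG * (256 - phase)) // 256

# Example: worth 400 at phase 256, about 475 at phase 128, and 550 at phase 0.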
mwyoung
Posts : 880 Join date : 2020-11-25 Location : USA
Subject: Re: Always sac the exchange Mon Apr 03, 2023 4:44 am
Henk wrote:
I watched some videos of Benjamin Finegold, and he is always joking about sacrificing the exchange. Only in the endgame are rooks much stronger than the minor pieces. So I thought: maybe give a rook only four points in the opening and add extra points the nearer you get to the endgame.
This is not needed, at least if you are talking about the evaluation used by modern chess engines that use "only" neural networks for their evaluation.
A neural network does not score a position with such crude methods as assigning a material value to each piece and then changing that value in different stages of the game.
This is what has allowed modern engines to greatly surpass even the best human players in positional understanding: they do not use any fixed rules for material, space, piece activity, central control, pawn structure, or king safety.
They use only what the network has learned in training to evaluate any position as an expected win/draw/loss ratio, and all search decisions are guided by that evaluation.
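For illustration, a rough Python sketch of the scheme mwyoung describes: the network reports a win/draw/loss distribution for a position and the search ranks moves by the expected score derived from it. The move names and probabilities below are made up.

Code:
# Sketch: turning a (win, draw, loss) output into a single expected score
# that a search can maximise. No per-piece material terms are involved.

def expected_score(win: float, draw: float, loss: float) -> float:
    """Expected game score in [0, 1]: a win counts 1, a draw 0.5, a loss 0."""
    total = win + draw + loss
    return (win + 0.5 * draw) / total

# Toy example with made-up probabilities: pick the move whose resulting
# position has the highest expected score for the side to move.
candidates = {"Rxd5": (0.31, 0.52, 0.17), "Rad1": (0.22, 0.60, 0.18)}
best = max(candidates, key=lambda m: expected_score(*candidates[m]))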
Henk
Posts : 1383 Join date : 2020-11-17
Subject: Re: Always sac the exchange Mon Apr 03, 2023 9:45 am
Yes, I was talking about the old way, without neural networks. As a programmer I am not a fan of neural networks; they are too much of a black box. I was always told to use a neural network only when you cannot solve the problem any other way.
Brendan
Posts : 400 Join date : 2020-11-18 Age : 40
Subject: Re: Always sac the exchange Tue Apr 04, 2023 11:59 am
Henk wrote:
Yes, I was talking about the old way, without neural networks. As a programmer I am not a fan of neural networks; they are too much of a black box. I was always told to use a neural network only when you cannot solve the problem any other way.
I think one can get the best of both worlds by 1. creating an engine with a hand-crafted evaluation (HCE) and a beautiful (not necessarily correct) style, and then 2. training a net based on this engine's games/eval.
Throw in MCTS search and the thing should be pretty interesting imo, regardless of which direction you go with style.
I believe dKappe created a "Frosty" net based on the eval of the iCe chess engine, and it came out REALLY positional in style.
I believe our friend Boban is a big fan of this one? Me too actually.
Last edited by Brendan on Tue Apr 04, 2023 5:43 pm; edited 1 time in total
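A hedged sketch of how the data for step 2 of Brendan's idea could be gathered: generating (position, eval) training labels from an existing engine with the python-chess library. The engine path, input file and search depth are placeholders.

Code:
# Sketch: label positions with a hand-crafted-eval engine's score so a net
# can later be trained on them. Requires python-chess; the engine path,
# input file and depth below are placeholders.
import chess
import chess.engine

ENGINE_PATH = "./hce_engine"   # hypothetical path to the HCE engine
DEPTH = 8                      # shallow labelling depth, adjust to taste

def label_positions(fen_file: str, out_file: str) -> None:
    engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
    with open(fen_file) as fens, open(out_file, "w") as out:
        for line in fens:
            board = chess.Board(line.strip())
            info = engine.analyse(board, chess.engine.Limit(depth=DEPTH))
            # Score in centipawns from White's point of view; skip forced mates.
            score = info["score"].white().score()
            if score is not None:
                out.write(f"{board.fen()};{score}\n")
    engine.quit()

if __name__ == "__main__":
    label_positions("positions.fen", "labels.csv")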
Chris Whittington
Posts : 1254 Join date : 2020-11-17 Location : France
mwyoung wrote:
A neural network does not score a position with such crude methods as assigning a material value to each piece and then changing that value in different stages of the game.
Yes, well, as I posited in the past (I've forgotten whether it was here or on the other forum): armed with a few hundred thousand test positions and any engine, NNUE or otherwise, one can label the test positions with an eval (at d1, d2, d7, whatever) and extract (reverse engineer), by regression analysis, what the engine thinks Q, R, B, N and P are worth. If you split the test positions into phases, you can see how the engine treats each piece's value by phase. I did this for about 20 NN engines and got piece-value graphs against phase for each of them (actually I was looking for "cloney" NNUEs). It worked quite well: you could visually see which engines had been trained on similar data, and the actual plots were interesting from a chessy point of view. Needless to say, pearls before swine: this brilliant piece of research got zero replies!
Where's Brendan?
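A minimal sketch of the regression Chris describes, assuming centipawn labels from White's point of view (as in the labelling sketch above) and a plain material-difference model fitted by least squares. Splitting the positions into phase buckets and refitting each bucket would give the value-versus-phase plots he mentions; numpy and python-chess are assumed.

Code:
# Sketch: reverse-engineer implied piece values from (FEN, eval) pairs by
# linear regression on material-count differences.
import chess
import numpy as np

PIECES = [chess.PAWN, chess.KNIGHT, chess.BISHOP, chess.ROOK, chess.QUEEN]

def material_features(board: chess.Board) -> list:
    """White count minus Black count for P, N, B, R, Q."""
    return [len(board.pieces(p, chess.WHITE)) - len(board.pieces(p, chess.BLACK))
            for p in PIECES]

def implied_piece_values(label_file: str) -> np.ndarray:
    rows, evals = [], []
    with open(label_file) as f:
        for line in f:
            fen, cp = line.rsplit(";", 1)
            rows.append(material_features(chess.Board(fen)))
            evals.append(float(cp))
    X = np.array(rows, dtype=float)
    y = np.array(evals, dtype=float)
    values, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit
    return values   # implied values of P, N, B, R, Q in centipawns

# Bucketing the label file by phase (e.g. by total non-pawn material) and
# fitting each bucket separately yields the value-versus-phase curves.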
Brendan
Chris Whittington wrote:
Where's Brendan?
A couple comments, Chris.
1. Where is this data? Sounds very interesting.
2. I'm here! What's up?
Chris Whittington
Posts : 1254 Join date : 2020-11-17 Location : France
Brendan wrote:
1. Where is this data? Sounds very interesting.
2. I'm here! What's up?
Do you have Python? I'm travelling at the moment, but I can share some fun stuff with you in a few days if you can cope with it.
Brendan
Chris Whittington wrote:
Do you have Python? I'm travelling at the moment, but I can share some fun stuff with you in a few days if you can cope with it.
I can install Python. Not a programmer, but not a dummy either.
What can we do, and what do you need?
I'll figure it out.
Just let me know.
Chris Whittington
Posts : 1254 Join date : 2020-11-17 Location : France
Brendan wrote:
I can install Python. Not a programmer, but not a dummy either.
What can we do, and what do you need? I'll figure it out.
Just let me know.
It would be good to collaborate with you. I'm back in action on Thursday …