The last meaningful change in Rodent's evaluation function starts here: https://github.com/nescitus/rodent-iv/blob/master/sources/src/eval.cpp#L104
Because of the suboptimal variable names, some verbal explanation is in order. In lines 104 and 105 the engine sets the game phase values needed for the usual interpolation of piece/square tables. Later it calculates two different piece/square table scores, based on distinct sets of tables. Two variables, primaryHypothesis and secondaryHypothesis, hold these two scores so that they can be compared. Both are already scaled for game phase, as is usually done. Then, at line 115, the difference between primaryHypothesis and secondaryHypothesis is calculated. This difference is in turn used to calculate the value "shift", which modifies the percentages/weights applied to both hypothetical scores. There is nothing special about using the square root of the score difference - it is just the first formula that happened to work. Lines 119-120 read user-defined percentage values, and then the magic happens: the higher of the two hypothetical scores gets its weight raised, and the lower gets its weight lowered.
To make a long story short: the engine calculates two different piece/square table scores. Both of them contribute to the final eval, but a higher weight is applied to the partial score that gives the better result. The current code picks the score that is better for White, but it might be interesting to use the score that is better for the engine's side.
I have implemented this idea 4 times: in Rodent, in a private engine that will be released someday, and in two failed attempts at creating a small engine to showcase this algorithm. 3 times out of 4 I got a result that was clearly better than using just one set of piece/square tables, or than mixing two sets with a fixed percentage.
I stopped working on it for two reasons. One was the fact that I could not post on Talkchess (it took me 3 months to decide to use a VPN and another month to configure it in a way that allowed me to post). The other was the advent of NNUE, which made changes to the traditional evaluation function seem insignificant. Before that, I managed to discover that the same technique can be applied to other big evaluation factors. Rodent does the same trick with two different mobility sums, and one of my failed test engines lumped together the piece/square table and king tropism scores.
Recently two things occurred to me. One is that, had I worked on this idea further, it might have converged to something NNUE-like. NNUE uses more tables, they are more fine-grained, and its mixing mechanism, constituted by a couple of layers of a neural network, is much more advanced. Yet the basic idea seems vaguely similar. To put things into perspective: right now, what I describe looks like the musings of an engineer working on improving a kerosene lamp long after the lightbulb was invented.
But the other thing is: perhaps people working on NNUE can extract one piece of information from this writeup. I used predefined sets of piece/square tables. Rodent allows its user to pick two out of 4 or 5 different table sets. Once you pick them, there are only three parameters to tune: the two weights for both sets, and a third one determining how fast the shift value changes. So would it be possible to populate the first layer of an NNUE with some known values, speeding up the learning process significantly?