Here is some more info about the engine from the author himself on talkchess:
Hi all!
I am Ivan Maklyakov, author of Uralochka.
Sorry for the long silence. I was waiting for an account registration confirmation.
A few words about my engine.
I started the project about a year ago. At first it was a simple engine with a 0x88 move generator, alpha-beta search, and a minimal evaluation function.
Later, bitboards replaced 0x88, the evaluation grew more complex, and so on. I also used the Texel method to tune the parameters.
As a result, I got an engine with a rating of about 3050 (on the CCRL scale).
Two months ago I started studying neural networks.
First there was an unsuccessful attempt to train a network for additional evaluation of the pawn structure.
Then, after several attempts, I implemented a neural network that replaces the evaluation function. It is similar to NNUE HalfKP, but as I understood it and was able to implement it. Now I am working on improving it and on iterative training (the current version is trained using the engine with the previous version). In the early stages, each such iteration gives a good increase in strength.
There is nothing special about the neural network. I implemented it myself (though I had problems with vectorizing the output-layer calculations and had to look at how it was done in other engines, mainly Koivisto). The architecture is similar to HalfKP, but with one hidden layer and 12 piece types (instead of HalfKP's 10). The king square is mapped to a smaller king area via a table.
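To make the feature scheme concrete, here is a minimal sketch of a HalfKP-like input index with 12 piece types and a king-square-to-king-area table. The 4x4 area grid, all names, and the index layout are my own illustrative assumptions, not Uralochka's actual code:

```python
# Hedged sketch of a HalfKP-like feature index: (king area, piece, square).
# The 4x4 king-area grid below is an assumed example of "mapping the king
# square to a smaller king area via a table"; Uralochka's real table may differ.

NUM_PIECE_TYPES = 12  # both colors' pawns..kings, unlike HalfKP's 10 (no kings)

# Map each of the 64 king squares to one of 16 areas (2x2 blocks of squares).
KING_AREA = [(sq // 8 // 2) * 4 + (sq % 8 // 2) for sq in range(64)]
NUM_KING_AREAS = 16

def feature_index(king_sq: int, piece: int, square: int) -> int:
    """Index of the (king area, piece type, piece square) input feature."""
    assert 0 <= piece < NUM_PIECE_TYPES and 0 <= square < 64
    return (KING_AREA[king_sq] * NUM_PIECE_TYPES + piece) * 64 + square

NUM_FEATURES = NUM_KING_AREAS * NUM_PIECE_TYPES * 64  # 16 * 12 * 64 = 12288
```

Bucketing king squares into areas shrinks the input layer compared to full HalfKP (which uses all 64 king squares), at the cost of distinguishing fewer king placements.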
To generate the dataset, the engine plays against itself at depth 5-7, with random moves for the first N plies. The dataset contains 500-1200 million positions. The neural network is trained by a Python script using the Keras framework.
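A minimal sketch of how such a network might be trained with Keras. The sizes, layer choices, and synthetic stand-in data are my assumptions (the clipped ReLU is a common NNUE convention); the author's actual script is not public:

```python
# Toy Keras training sketch for a one-hidden-layer NNUE-style evaluator.
# All sizes are illustrative; real training would stream the self-play
# dataset (saved via cnpy in NumPy format) instead of using random data.
import numpy as np
from tensorflow import keras

NUM_FEATURES = 768  # toy input size (e.g. 12 piece types x 64 squares)
HIDDEN = 64         # toy hidden-layer width

model = keras.Sequential([
    keras.Input(shape=(NUM_FEATURES,)),
    keras.layers.Dense(HIDDEN),
    keras.layers.ReLU(max_value=1.0),  # clipped ReLU, common in NNUE-style nets
    keras.layers.Dense(1),             # scalar evaluation
])
model.compile(optimizer=keras.optimizers.Adam(1e-3), loss="mse")

# Stand-in for positions/targets loaded from the self-play dataset.
X = (np.random.rand(1024, NUM_FEATURES) < 0.04).astype(np.float32)
y = np.random.uniform(-1.0, 1.0, size=(1024, 1)).astype(np.float32)
model.fit(X, y, batch_size=256, epochs=1, verbose=0)
```

The binary input rows mimic the sparse piece-placement features: only a few percent of inputs are active in any position.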
The engine uses external libraries:
- https://github.com/jdart1/Fathom - access to Syzygy endgame tablebases.
- https://github.com/graphitemaster/incbin - embedding a binary file into the executable.
- https://github.com/rogersce/cnpy - saving datasets in NumPy format.
When writing the engine, I used information from:
- The Ethereal (https://github.com/AndyGrant/Ethereal) and Igel (https://github.com/vshcherbyna/igel) engines - I looked at the search procedure of modern engines (the search in Stockfish is harder to understand).
- The Stockfish engine (https://github.com/official-stockfish/S ... tree/tools) and its training utility (https://github.com/glinscott/nnue-pytor ... cs/nnue.md) - I looked at the principles of implementing a neural network and generating a dataset for training.
- The Koivisto engine (https://github.com/Luecx/Koivisto) - I looked at how vector instructions are used to calculate the output layer of a neural network.
Thanks to the authors of these libraries and engines!
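The vectorization idea borrowed from Koivisto can be illustrated in Python with NumPy: instead of a scalar loop over the hidden layer, the output is a single dot product over whole vectors (in the engine itself this would be SIMD intrinsics in C++). The int16 quantization here is a typical NNUE convention assumed for the sketch, not taken from Uralochka:

```python
# Sketch: scalar vs. vectorized output-layer computation.
# Values fit comfortably in int32: |127 * 64| * 256 is well below 2^31.
import numpy as np

def output_scalar(h, w, b):
    """Naive loop: one multiply-accumulate per hidden neuron."""
    s = b
    for i in range(len(h)):
        s += int(h[i]) * int(w[i])
    return s

def output_vectorized(h, w, b):
    """Whole-vector dot product (what SIMD does in hardware)."""
    return int(np.dot(h.astype(np.int32), w.astype(np.int32))) + b

hidden = np.random.randint(0, 127, size=256).astype(np.int16)   # clipped activations
weights = np.random.randint(-64, 64, size=256).astype(np.int16)
assert output_scalar(hidden, weights, 10) == output_vectorized(hidden, weights, 10)
```

Both paths compute the same sum; the vectorized form just processes many terms per instruction, which is where the speedup in the engine comes from.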
I do not plan to open the sources yet, because I am embarrassed by the poor quality of the code. After refactoring, the sources will be opened.
Here is an archive of all previous versions, including my own rating list and changelog.
https://drive.google.com/drive/folders/ ... sp=sharing