
Releases: QueensGambit/CrazyAra

Checkmating One, by Using Many

02 Mar 19:16
bb3b5b6


This release contains the model files of our paper Checkmating One, by Using Many: Combining Mixture of Experts With MCTS to Improve in Chess, accepted at IEEE Transactions on Games.

Model information:

  • big_3x_risev33_after_rl.zip: Large risev33 network trained on KingBaseLight and later improved via RL for 10 model updates.
  • big_3x_risev33.zip: Large risev33 network trained on KingBaseLight dataset.
  • checkmating_one_by_using_many_models.zip: Individual models trained via separated learning, weighted learning, and staged learning.
  • Pommerman_MoE_Models.zip: Includes the mixture of experts models for Pommerman.

All models except the Pommerman ones are usable only for chess.

The repository for our Mixture of Experts (MoE) experiments for Pommerman can be found here:

Instructions for reproducing the results in the paper

Game phase extraction for training is implemented in game_phase_detector.py.
Game phase extraction for MCTS is implemented in the get_phase function of board.cpp. Change the first if condition to (num_phases == 3 && true) to use the lichess phase definition when using three experts.
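For illustration, a minimal Python sketch of a "movecountX"-style phase definition (the actual logic lives in game_phase_detector.py and board.cpp; the threshold X below is a hypothetical example, not the value used in the paper):

```python
def get_phase(fullmove_number: int, x: int = 15) -> int:
    """Toy 'movecountX' phase definition: split a game into three
    phases purely by full-move number, with thresholds X and 2*X.
    Returns 0 (opening), 1 (middlegame), or 2 (endgame)."""
    if fullmove_number <= x:
        return 0
    if fullmove_number <= 2 * x:
        return 1
    return 2
```

With x=15, move 10 would map to the opening expert, move 20 to the middlegame expert, and move 40 to the endgame expert. The lichess definition instead derives the phase from board features rather than the move counter.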

Workflow of training one phase expert network:

  1. Gather dataset pgn files and specify default_dir in main_config.py. The pgn files should be placed in <default_dir>/pgn/<set_type>, see main_config.py for more details.
  2. Specify phase and phase_definition (either "lichess" or "movecountX") in main_config.py.
  3. Generate planes (input representation) from the pgn files with convert_pgn_to_planes.ipynb.
  4. Use train_cnn.py to train the expert:
    • tc.seed=9 was used for all our experiments.
    • Specify tc.export_dir for the model export directory.
    • Adjust phase_weights as needed. Use 1.0 for equal weights for all phases.
    • For the weighted learning approach, set phase in main_config.py to None and only adjust phase_weights in train_cnn.py.
  5. Repeat the process for each expert.
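The settings named in the steps above can be summarized as follows. This is an illustrative sketch only: the names (default_dir, phase, phase_definition, tc.seed, tc.export_dir, phase_weights) come from the instructions above, but the dict layout and example values are not the actual config format of main_config.py or train_cnn.py:

```python
# Hypothetical summary of the per-expert training settings described above.
main_config = {
    "default_dir": "/data/chess/",      # pgn files go under <default_dir>/pgn/<set_type>
    "phase": 0,                         # which expert to train; None for weighted learning
    "phase_definition": "lichess",      # either "lichess" or "movecountX"
}
train_config = {
    "seed": 9,                          # tc.seed used for all experiments
    "export_dir": "/data/export/phase0/",  # tc.export_dir, model export directory
    "phase_weights": {0: 1.0, 1: 1.0, 2: 1.0},  # 1.0 = equal weight for all phases
}
```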

Workflow to use phase experts as an MoE agent in MCTS:

  1. Copy the .onnx and .tar files from the best-model folder of each expert's training export directory.
  2. Paste the .onnx and .tar files from expert i into a model directory, e.g. /data/model/ClassicAra/chess/<moe_name>/phase<i> (one phase<i> folder for each phase/expert).
  3. Build the ClassicAra binary, see Build Instructions.
  4. Launch the ClassicAra binary.
    • Specify the model directory as needed, e.g. "setoption name Model_Directory value /data/model/ClassicAra/chess/correct_phases"
    • Specify Batch_Size as needed, e.g. "setoption name Batch_Size value 8"
    • Specify GPU as needed, e.g. "setoption name First_Device_ID value 0"
    • Generate .trt files by executing the "isready" command
  5. Run a Cutechess match to compare the different approaches; see run_cutechess_experiments.py for example cutechess commands.
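Step 4 above could look like the following UCI session (the model directory path is the example from the instructions; adjust it to your own setup):

```
uci
setoption name Model_Directory value /data/model/ClassicAra/chess/correct_phases
setoption name Batch_Size value 8
setoption name First_Device_ID value 0
isready
```

The isready command triggers the .trt file generation, which can take several minutes on the first run.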

Aras 1.0.5 (CrazyAra, ClassicAra, MultiAra, XiangqiAra, StrategoAra)

08 Aug 22:07


Installation instructions

The previous default ClassicAra model is included within each release package.
Moreover, the binary packages include the required inference libraries for each platform.

The newer ClassicAra models can be downloaded in release 1.0.4.
You may choose alpha_vil_fx_models.zip and select a model size depending on your GPU/CPU and time-control.
At a very low time control (e.g. 30ms/Move), it is recommended to reduce the Batch-Size to 16.

The models for CrazyAra and MultiAra should be downloaded separately and unzipped (see release 0.9.5).

  • CrazyAra-rl-model-os-96.zip
  • MultiAra-rl-models.zip (improved MultiAra models using reinforcement learning (RL))
  • MultiAra-sl-models.zip (initial MultiAra models using supervised learning)

For XiangqiAra you can download XiangqiAra-sl-model.zip (see release 0.9.9).

Next, move the model files into the model/<engine-name>/<variant> folder.
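A sketch of the resulting layout (engine and variant names below are placeholders; the actual folder names depend on which engine and variant you downloaded models for):

```shell
# Example only: MultiAra/atomic is a hypothetical engine/variant pair.
ENGINE=MultiAra
VARIANT=atomic
mkdir -p "model/$ENGINE/$VARIANT"
# After unzipping the matching release archive, move its model files here, e.g.:
# mv MultiAra-rl-models/$VARIANT/*.onnx "model/$ENGINE/$VARIANT/"
ls -d "model/$ENGINE/$VARIANT"
```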

Stratego is only included in the Linux release files as OpenSpiel is not officially supported on Windows and Mac.

Main changes

  • Check for is_terminal() directly after creating a new node #204
  • Virtual_Visit, Virtual_Mix, Virtual_Offset #205 (this led to ~100 Elo increase at very low node count / very fast TC)

Bug fixes

  • Fix 960 initialization problem #207 (this affected CrazyAra version >= 0.9.5 and resulted in a ~30 Elo decrease)
  • Fix first_and_second_max() #206

Regression test (from #205)

TC: 30ms/move
-each option.Batch_Size=16 option.Fixed_Movetime=30

Score of ClassicAra_1.0.5 vs ClassicAra-0.9.5: 526 - 243 - 231  [0.641] 1000
Elo difference: 101.1 +/- 19.4, LOS: 100.0 %, DrawRatio: 23.1 %
TC: 1min+0.1s game
-openings file=UHO_V3_8mvs_big_+140_+169.epd -each option.Batch_Size=16

Score of ClassicAra_1.0.5 vs ClassicAra-0.9.5
Elo difference: 6.27 +/- 23.28
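The reported Elo differences follow from the match scores via the standard logistic Elo formula; a quick check in Python:

```python
import math

def elo_diff(wins: int, losses: int, draws: int) -> float:
    """Elo difference from a match result using the logistic model:
    elo = 400 * log10(s / (1 - s)), where s is the score fraction."""
    games = wins + losses + draws
    s = (wins + 0.5 * draws) / games
    return 400 * math.log10(s / (1 - s))

# 526 wins, 243 losses, 231 draws over 1000 games gives roughly +101 Elo,
# matching the 101.1 +/- 19.4 reported above.
print(round(elo_diff(526, 243, 231), 1))
```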

Inference libraries

The following inference libraries are used in each package:

  • Aras_1.0.5_Linux_TensorRT
    • TensorRT-8.2.3.0.Linux.x86_64-gnu.cuda-11.4.cudnn8.2
  • Aras_1.0.5_Win_TensorRT
    • TensorRT-8.2.2.1.Windows10.x86_64.cuda-11.4.cudnn8.2
  • Aras_1.0.5_Linux_OpenVino.zip
    • openvino_toolkit_ubuntu18_2023.0.1.11005
  • Aras_1.0.5_Mac_OpenVino.zip
    • openvino_toolkit_macos_10_15_2023.0.1.11005
  • Aras_1.0.5_Win_OpenVino.zip
    • openvino_toolkit_windows_2023.0.1.11005

Models - Representation Matters: The Game of Chess Poses a Challenge to Vision Transformers

20 Jun 21:21


This release contains the different models used in the final comparison in our paper: Representation Matters: The Game of Chess Poses a Challenge to Vision Transformers. Put the model files (.tar, .onnx) into the corresponding model directory (e.g. ./model/ClassicAra/chess/). Only the .onnx files are used for inference. You can remove the .tar files if you are not interested in reinforcement learning or fine-tuning the model.

Update (2023-26-10)
For more exhaustive information regarding ..., please consult:

Update: 2024-06-10: Fixed Batchnorm for alpha_vil_fx_models.zip and alpha_vil_model.zip

ClassicAra 1.0.3

18 Aug 16:45
7ccb332


Pre-release

This version has been submitted to the TCEC Season 23 event.

The engine.json configuration file and update.sh shell script can be used to replicate the testing environment on a multi-GPU Linux operating system.

Changelog

  • Update MCTS solver for MCTS_SINGLE_PLAYER (#184)

  • Dockerfile Pytorch Support (#183)

  • Fen position from epd file (#182)

  • Dynamic ONNX shape support (#181)

  • Update RL-Loop (#180)

  • Pytorch Deep Learning Backend (#179)

  • Update binaryio.py (#178)

  • Rename mctsmatch and evaltournament (#177)

(no strength improvement)

StrategoAra, Hex, DarkHex 1.0.2 (models only)

08 Aug 16:21
7e46df6


This release features the model files for BarrageStratego, DarkHex and Hex.

ClassicAra 1.0.1

05 Jul 10:37


Pre-release

This version has been submitted to the FRC 5 and DFRC 1 event.

The engine.json configuration file and update.sh shell script can be used to replicate the testing environment on a multi-GPU Linux operating system.

Changelog

  • Tablebase Mating Sequences (#176)
  • Rename mctsmatch and evaltournament (#177)
  • Update binaryio.py (#178)
  • Pytorch Deep Learning Backend (#179)
  • Update RL-Loop (#180)

ClassicAra 1.0.0

13 May 15:39


Pre-release

This version has been submitted to the TCEC Cup 10 event.

The engine.json configuration file and update.sh shell script can be used to replicate the testing environment on a multi-GPU Linux operating system.

Changelog

  • Fix terminal solver for MCTS_SINGLE_PLAYER #168

  • Remove dependency of SF for non chess related environments #169

  • fixed params file selection from the main_config for the neural_net_api.py #171

  • Hex #172

  • Remove Child_Threads and SearchThreadMaster #173

  • Backend TensorRT 7 #174

Aras 0.9.9 (CrazyAra, ClassicAra, MultiAra, XiangqiAra)

14 Feb 20:36


Notes

Features

  • First experimental XiangqiAra release.

    • Move generation back-end and Xiangqi ruleset is based on Fairy-Stockfish.
    • Uses supervised neural network trained on 10k human Xiangqi games.
      Please refer to the thesis Evaluation of Monte-Carlo Tree Search for Xiangqi by Maximilian Langer (PDF) for more information.
  • UCI_Chess960 support as introduced in https://github.com/QueensGambit/CrazyAra/releases/tag/0.9.8. (However, there is no official 960 network yet.)

  • TensorRT API Update #164

Major bug fixes

  • Handle flooding of UCI-commands (#167)
    • CrazyAra Going To Infinite Analysis Mode On 1 Position (Can't Be Stopped) In Liground After Making Two Moves On The Board (#81)
  • Avoid repeating positions in Xiangqi (closes #101) #166

TCEC

This version has been submitted to the TCEC Season 22.

ClassicAra 0.9.9 uses the wdlp-rise3.3-input3.0 model, which was trained on the Kingbase2019lite dataset, as in release 0.9.5.

The engine.json configuration file and update.sh shell script can be used to replicate the testing environment on a multi-GPU Linux operating system.

Installation instructions

The latest ClassicAra model is included within each release package.
Moreover, the binary packages include the required inference libraries for each platform.

However, the models for CrazyAra and MultiAra should be downloaded separately and unzipped (see release 0.9.5).

  • CrazyAra-rl-model-os-96.zip
  • MultiAra-rl-models.zip (improved MultiAra models using reinforcement learning (RL))
  • MultiAra-sl-models.zip (initial MultiAra models using supervised learning)

For XiangqiAra you can download XiangqiAra-sl-model.zip (see release 0.9.9).

Next, move the model files into the model/<engine-name>/<variant> folder.

Inference libraries

The following inference libraries are used in each package:

  • Aras_0.9.9_Linux_TensorRT
    • TensorRT-8.2.3.0.Linux.x86_64-gnu.cuda-11.4.cudnn8.2
  • Aras_0.9.9_Win_TensorRT
    • TensorRT-8.0.1.6.Windows10.x86_64.cuda-11.3.cudnn8.2
  • Aras_0.9.9_Linux_OpenVino.zip
    • OpenVino 2021.4.582 LTS
  • Aras_0.9.9_Mac_OpenVino.zip
    • OpenVino 2021.4.582 LTS
  • Aras_0.9.9_Win_OpenVino.zip
    • OpenVino 2021.4.582 LTS

Updates

2022-05-20: Aras_0.9.9_Win_OpenVino.zip: Fixed spelling of folder name: XinagqiAra -> XiangqiAra (thanks to @piladinmew for the hint)

ClassicAra 0.9.8

19 Dec 17:00


Pre-release

This version has been submitted to the TCEC FRC 4 event.
The option UCI_Chess960 was added in ClassicAra 0.9.8 and is enabled by default.

The engine.json configuration file and update.sh shell script can be used to replicate the testing environment on a multi-GPU Linux operating system.

Due to some difficulties in converting a newly trained network to ONNX, the same neural network model is used as in classical chess.

ClassicAra 0.9.7.post0

01 Nov 23:15


This version has been submitted to the TCEC Swiss 2 event.
ClassicAra 0.9.7 achieves higher GPU and CPU utilization thanks to a larger batch size and more threads (#160).

The engine.json configuration file and update.sh shell script can be used to replicate the testing environment on a multi-GPU Linux operating system.

Regression test

  • TC: 7s + 0.1s
  • Opening suite: Unbalanced_Human_Openings_V3/UHO_V3_+150_+159/UHO_V3_8mvs_big_+140_+169.epd
Score of ClassicAra 0.9.7 (Threads 2, ChildThreads 4, BSize 64) vs ClassicAra 0.9.6 (Threads 2, BSize 16):
 64 - 26 - 72 [0.617]
Elo difference: 83.0 +/- 40.2, LOS: 100.0 %, DrawRatio: 44.4 %

162 of 1000 games finished.
  • 0.9.7.post0: Deactivated the removed get_avg_depth() implementation to avoid a potential crash

Known issues

  • TensorRT Memory Free Error (#161)