hete is a performant Deep-CFR network trainer, written in C++ using MLX.
- Decoder-only poker GPT in one file (src/models/model.h)
- Minimal 2+ player poker engine
- Monte Carlo outcome sampler for game rollouts
- CFR training loop
- Fast hand evaluations using OMPEval library
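The regret-matching update at the heart of any CFR training loop can be sketched in a few lines. This is an illustrative Python sketch of the general technique, not hete's actual MLX/C++ implementation:

```python
# Regret matching: turn cumulative counterfactual regrets at an
# information set into a strategy. Positive regrets are normalized;
# if no action has positive regret, play uniformly at random.
# (Illustrative sketch only -- not code from this repo.)

def regret_matching(regrets):
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    if total > 0:
        return [p / total for p in positives]
    n = len(regrets)
    return [1.0 / n] * n

# The action with the most accumulated regret (i.e. the one we most
# "wish we had played") gets the largest probability mass.
strategy = regret_matching([3.0, 1.0, -2.0])
```

In Deep CFR the cumulative regrets are predicted by a neural network rather than stored in a table, but the strategy is recovered from them the same way.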
terminology:
- mbb = 1/1000 of a big blind
- baseline statistic = the difference between the outcome of how you played a hand and how Slumbot would have played it; more positive is better.
- mbb_per_hand = average mbb won per hand across 10,000 games
winning by 50 mbb or more per hand on average is considered a significant win between professionals.
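To make the arithmetic above concrete, here is a hypothetical helper (not part of the repo) that converts chip winnings to mbb_per_hand:

```python
# Convert chip winnings to milli-big-blinds per hand.
# 1 mbb = big_blind / 1000, so chips_per_hand / (big_blind / 1000).
# (Hypothetical helper for illustration; big_blind size is an assumption.)

def mbb_per_hand(total_chips_won, num_hands, big_blind):
    chips_per_hand = total_chips_won / num_hands
    return chips_per_hand / (big_blind / 1000)

# Winning 50,000 chips over 10,000 hands at a 100-chip big blind
# is 5 chips/hand, i.e. 50 mbb/hand -- the "significant win" bar.
edge = mbb_per_hand(50_000, 10_000, 100)
```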
performance of my latest run against slumbot.com:
there's work to do.
first clone hete:
git clone https://github.com/pythonlearner1025/hete.git &&
cd hete
install dependencies:
- build OMPEval:
git clone https://github.com/pythonlearner1025/OMPEval.git &&
cd OMPEval &&
make clean &&
make
- build and install mlx (might have to sudo make install):
git clone https://github.com/ml-explore/mlx.git &&
cd mlx &&
mkdir -p build && cd build &&
cmake .. && make -j &&
make install
- build hete:
mkdir build && cd build && cmake ..
set your parameters in hete/src/constants.h and run:
make && ./main
trained models will be saved under:
hete/out/<timestamp>/<cfr_iteration_index>/<player_index>
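The checkpoint layout above can be browsed programmatically. A hedged sketch, assuming the directory names follow the out/<timestamp>/<cfr_iteration_index>/<player_index> pattern exactly (the files inside each player directory are not specified here):

```python
import os

# Walk out/ and collect (timestamp, cfr_iteration, player) triples
# from the directory layout described above. Hypothetical helper;
# adapt the path and filtering to your actual run.
def list_checkpoints(out_dir="out"):
    found = []
    for ts in sorted(os.listdir(out_dir)):
        ts_dir = os.path.join(out_dir, ts)
        if not os.path.isdir(ts_dir):
            continue
        for it in sorted(os.listdir(ts_dir)):
            it_dir = os.path.join(ts_dir, it)
            if not os.path.isdir(it_dir):
                continue
            for player in sorted(os.listdir(it_dir)):
                found.append((ts, it, player))
    return found
```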
create and activate a virtual environment:
python -m venv env && . env/bin/activate
pip install -r requirements.txt
evaluate against slumbot:
python eval.py --wandb 0 --num_hands 1000 --auto 1
optionally stream to wandb with --wandb 1
plot performance against slumbot:
python plot.py
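plot.py's exact output isn't shown here; as a sketch of the quantity such a plot typically tracks, here is a hypothetical helper (not the script's actual code) computing the running mbb_per_hand over the hands played so far:

```python
# Running average winnings in mbb over hands played so far -- the
# curve usually plotted when evaluating against Slumbot. The noisy
# per-hand results smooth out as more hands accumulate.
# (Hypothetical illustration, not code from plot.py.)

def running_mbb_per_hand(mbb_results):
    out, total = [], 0.0
    for i, r in enumerate(mbb_results, start=1):
        total += r
        out.append(total / i)
    return out
```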

