Welcome to NEAT-Python’s documentation!
Warning
Breaking Changes — cumulative through v2.1
If you are upgrading from an earlier version, note the following breaking changes:
v2.0:
- CTRNN.create() no longer accepts a time_constant argument. Time constants are now per-node evolvable gene attributes, configured via time_constant_* parameters in the [DefaultGenome] config section. Checkpoints created with v1.x are not loadable in v2.0 or later.
- ThreadedEvaluator and DistributedEvaluator were removed; use ParallelEvaluator instead. All required configuration parameters must now be explicitly specified in the config file.
v1.0:
- Innovation number tracking is fully implemented per the NEAT paper; checkpoints from v0.x are not compatible.
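For the time_constant change, a configuration fragment might look like the following. The exact parameter names are an assumption here, modeled on the init/min/max/mutate naming pattern used by other DefaultGenome float attributes (such as bias_* and response_*); check the configuration file description for the names your version actually uses.

```ini
[DefaultGenome]
# Hypothetical time-constant gene attribute settings; names follow the
# bias_*/response_* attribute pattern and may differ in your version.
time_constant_init_mean    = 1.0
time_constant_init_stdev   = 0.1
time_constant_max_value    = 10.0
time_constant_min_value    = 0.01
time_constant_mutate_power = 0.1
time_constant_mutate_rate  = 0.1
time_constant_replace_rate = 0.05
```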
See the Migration Guide for detailed upgrade instructions.
NEAT is a method developed by Kenneth O. Stanley for evolving arbitrary neural networks. NEAT-Python is a pure Python implementation of NEAT, with no dependencies other than the Python standard library.
Currently this library supports Python versions 3.8 through 3.14, as well as PyPy 3.
For academic researchers: See NEAT-Python for Academic Research for guidance on using neat-python in research publications.
Many thanks to the original authors of this implementation, Cesar Gomes Miguel, Carolina Feher da Silva, and Marcio Lobo Netto!
Note
Some of the example code has other dependencies. For your convenience, there is a conda environment YAML file in the examples directory that you can use to set up an environment supporting all of the current examples. TODO: Improve README.md file information for the examples.
For further information regarding general concepts and theory, please see Selected Publications on Stanley’s website, or his AMA on Reddit.
If you encounter any confusing or incorrect information in this documentation, please open an issue in the GitHub project.
Contents:
Understanding NEAT
User Guides
- NEAT-Python for Academic Research
- Configuration file description
- Reproducibility
- Cookbook: Common Patterns
- How to: Set Specific Output Activation Functions
- How to: Use Parallel Evaluation
- How to: Use GPU-Accelerated Evaluation
- How to: Save and Restore Checkpoints
- How to: Debug “Population Not Evolving”
- How to: Interpret Fitness Trends
- How to: Control Network Complexity
- How to: Handle Different Output Ranges
- How to: Configure for Different Problem Types
- Common Gotchas
- Next Steps
- Customizing Behavior
- Overview of builtin activation functions
- Continuous-time recurrent neural network implementation
- Network Export
API Reference