
SimonBlanke/Gradient-Free-Optimizers



Simple and reliable optimization with local, global, population-based and sequential techniques in numerical discrete search spaces.



Documentation Homepage · Optimizers · API Reference · Examples


(Animation: Bayesian optimization on the Ackley function)

Gradient-Free-Optimizers is a Python library for gradient-free optimization of black-box functions. It provides a unified interface to 21 optimization algorithms, from simple hill climbing to Bayesian optimization, all operating on discrete numerical search spaces defined via NumPy arrays.

Designed for hyperparameter tuning, simulation optimization, and any scenario where gradients are unavailable or impractical. The library prioritizes simplicity: define your objective function, specify the search space, and run. It serves as the optimization backend for Hyperactive but can also be used standalone.



Installation

pip install gradient-free-optimizers


Optional dependencies
pip install gradient-free-optimizers[progress]  # Progress bar with tqdm
pip install gradient-free-optimizers[sklearn]   # scikit-learn for surrogate models
pip install gradient-free-optimizers[full]      # All optional dependencies
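
To confirm the package is installed, a quick standard-library check (independent of the library's own API) is:

from importlib.metadata import version

# The distribution name matches the pip install command above
print(version("gradient-free-optimizers"))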

Key Features

21 Optimization Algorithms
Local, global, population-based, and sequential model-based optimizers. Switch algorithms with one line of code; see the sketch after this list.
Zero Configuration
Sensible defaults for all parameters. Start optimizing immediately without tuning the optimizer itself.
Memory System
Built-in caching prevents redundant evaluations. Critical for expensive objective functions like ML models.
Discrete Search Spaces
Define parameter spaces with familiar NumPy syntax using arrays and ranges.
Constraints Support
Define constraint functions to restrict the search space. Invalid regions are automatically avoided.
Minimal Dependencies
Only NumPy and pandas are required. Optional integrations for progress bars (tqdm) and surrogate models (scikit-learn).
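
As a minimal sketch of the one-line algorithm switch: every optimizer class takes the search space in its constructor and exposes the same search() method, so only the class name changes (BayesianOptimizer additionally relies on the scikit-learn extra from the installation section).

import numpy as np
from gradient_free_optimizers import (
    HillClimbingOptimizer,
    RandomSearchOptimizer,
    BayesianOptimizer,
)

def objective(params):
    # Toy objective with its maximum at x = y = 0
    return -(params["x"] ** 2 + params["y"] ** 2)

search_space = {
    "x": np.arange(-5, 5, 0.1),
    "y": np.arange(-5, 5, 0.1),
}

# Switching the strategy is a one-line change: pick a different class.
for OptimizerClass in (HillClimbingOptimizer, RandomSearchOptimizer, BayesianOptimizer):
    opt = OptimizerClass(search_space)
    opt.search(objective, n_iter=30)
    print(OptimizerClass.__name__, opt.best_score)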

Quick Start

import numpy as np
from gradient_free_optimizers import HillClimbingOptimizer

# Define objective function (maximize)
def objective(params):
    x, y = params["x"], params["y"]
    return -(x**2 + y**2)  # Negative paraboloid, optimum at (0, 0)

# Define search space
search_space = {
    "x": np.arange(-5, 5, 0.1),
    "y": np.arange(-5, 5, 0.1),
}

# Run optimization
opt = HillClimbingOptimizer(search_space)
opt.search(objective, n_iter=1000)

# Results
print(f"Best score: {opt.best_score}")
print(f"Best params: {opt.best_para}")

Output (exact values vary between runs):

Best score: -0.02
Best params: {'x': 0.1, 'y': 0.1}

Core Concepts

flowchart LR
    O["Optimizer
    ━━━━━━━━━━
    21 algorithms"]

    S["Search Space
    ━━━━━━━━━━━━
    NumPy arrays"]

    F["Objective
    ━━━━━━━━━━
    f(params) → score"]

    D[("Search Data
    ━━━━━━━━━━━
    history")]

    O -->|propose| S
    S -->|params| F
    F -->|score| O

    O -.-> D
    D -.->|warm start| O

Optimizer: Implements the search strategy. Choose from 21 algorithms across four categories: local search, global search, population-based, and sequential model-based.

Search Space: Defines valid parameter combinations as NumPy arrays. Each key is a parameter name, each value is an array of allowed values.

Objective Function: Your function to maximize. Takes a dictionary of parameters, returns a score. Use negation to minimize.

Search Data: Complete history of all evaluations accessible via opt.search_data for analysis and warm-starting future searches.
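
For example, the search history sketched in the diagram above can be inspected directly after a run. This assumes opt.search_data is a pandas DataFrame with one column per parameter plus a score column, as used by the warm-start example further down:

import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer

def objective(params):
    return -(params["x"] ** 2)

search_space = {"x": np.arange(-5, 5, 0.1)}

opt = RandomSearchOptimizer(search_space)
opt.search(objective, n_iter=30)

# Every evaluated position and its score, in evaluation order
history = opt.search_data
print(history.head())
print("evaluations:", len(history))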


Examples

Hyperparameter Optimization

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_wine
import numpy as np

from gradient_free_optimizers import BayesianOptimizer

X, y = load_wine(return_X_y=True)

def objective(params):
    model = GradientBoostingClassifier(
        n_estimators=params["n_estimators"],
        max_depth=params["max_depth"],
        learning_rate=params["learning_rate"],
    )
    return cross_val_score(model, X, y, cv=5).mean()

search_space = {
    "n_estimators": np.arange(50, 300, 10),
    "max_depth": np.arange(2, 10),
    "learning_rate": np.logspace(-3, 0, 20),
}

opt = BayesianOptimizer(search_space)
opt.search(objective, n_iter=50)

Bayesian Optimization

import numpy as np
from gradient_free_optimizers import BayesianOptimizer

def ackley(params):
    x, y = params["x"], params["y"]
    return -(
        -20 * np.exp(-0.2 * np.sqrt(0.5 * (x**2 + y**2)))
        - np.exp(0.5 * (np.cos(2 * np.pi * x) + np.cos(2 * np.pi * y)))
        + np.e + 20
    )

search_space = {
    "x": np.arange(-5, 5, 0.01),
    "y": np.arange(-5, 5, 0.01),
}

opt = BayesianOptimizer(search_space)
opt.search(ackley, n_iter=100)

Particle Swarm Optimization

import numpy as np
from gradient_free_optimizers import ParticleSwarmOptimizer

def rastrigin(params):
    A = 10
    values = [params[f"x{i}"] for i in range(5)]
    return -sum(v**2 - A * np.cos(2 * np.pi * v) + A for v in values)

search_space = {f"x{i}": np.arange(-5.12, 5.12, 0.1) for i in range(5)}

opt = ParticleSwarmOptimizer(search_space, population=20)
opt.search(rastrigin, n_iter=500)

Simulated Annealing

import numpy as np
from gradient_free_optimizers import SimulatedAnnealingOptimizer

def sphere(params):
    return -(params["x"]**2 + params["y"]**2)

search_space = {
    "x": np.arange(-10, 10, 0.1),
    "y": np.arange(-10, 10, 0.1),
}

opt = SimulatedAnnealingOptimizer(
    search_space,
    start_temp=1.2,
    annealing_rate=0.99,
)
opt.search(sphere, n_iter=1000)

Constrained Optimization

import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer

def objective(params):
    return params["x"] + params["y"]

def constraint(params):
    # Only positions where x + y < 5 are valid
    return params["x"] + params["y"] < 5

search_space = {
    "x": np.arange(0, 10, 0.1),
    "y": np.arange(0, 10, 0.1),
}

opt = RandomSearchOptimizer(search_space, constraints=[constraint])
opt.search(objective, n_iter=1000)

Memory and Warm Starting

import numpy as np
from gradient_free_optimizers import HillClimbingOptimizer

def expensive_function(params):
    # Simulating an expensive computation
    return -(params["x"]**2 + params["y"]**2)

search_space = {
    "x": np.arange(-10, 10, 0.1),
    "y": np.arange(-10, 10, 0.1),
}

# First search
opt1 = HillClimbingOptimizer(search_space)
opt1.search(expensive_function, n_iter=100, memory=True)

# Continue with warm start using previous search data
opt2 = HillClimbingOptimizer(search_space)
opt2.search(expensive_function, n_iter=100, memory_warm_start=opt1.search_data)
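
Continuing the snippet above, the same mechanism works across sessions if the search data is written to disk first. This sketch assumes opt1.search_data is a pandas DataFrame that round-trips cleanly through CSV:

import pandas as pd

# Persist the first run's evaluation history ...
opt1.search_data.to_csv("search_data.csv", index=False)

# ... and reload it later (for example in a new Python session)
previous_data = pd.read_csv("search_data.csv")

opt3 = HillClimbingOptimizer(search_space)
opt3.search(expensive_function, n_iter=100, memory_warm_start=previous_data)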

Early Stopping

import numpy as np
from gradient_free_optimizers import BayesianOptimizer

def objective(params):
    return -(params["x"]**2 + params["y"]**2)

search_space = {
    "x": np.arange(-10, 10, 0.1),
    "y": np.arange(-10, 10, 0.1),
}

opt = BayesianOptimizer(search_space)
opt.search(
    objective,
    n_iter=1000,
    max_time=60,           # Stop after 60 seconds
    max_score=-0.01,       # Stop when score reaches -0.01
    early_stopping={       # Stop if no improvement for 50 iterations
        "n_iter_no_change": 50,
    },
)

Ecosystem

This library is part of a suite of optimization tools. For updates on these packages, follow their repositories on GitHub.

Hyperactive: Hyperparameter optimization framework with experiment abstraction and ML integrations
Gradient-Free-Optimizers: Core optimization algorithms for black-box function optimization
Surfaces: Test functions and benchmark surfaces for optimization algorithm evaluation

Documentation

User Guide: Comprehensive tutorials and explanations
API Reference: Complete API documentation
Optimizers: Detailed description of all 21 algorithms
Examples: Code examples for various use cases

Contributing

Contributions welcome! See CONTRIBUTING.md for guidelines.


Citation

If you use this software in your research, please cite:

@software{gradient_free_optimizers,
  author = {Simon Blanke},
  title = {Gradient-Free-Optimizers: Simple and reliable optimization with local, global, population-based and sequential techniques in numerical search spaces},
  year = {2020},
  url = {https://github.com/SimonBlanke/Gradient-Free-Optimizers},
}

License

MIT License - Free for commercial and academic use.