A PyTorch implementation of the NeurIPS 2024 paper:
Controlling Continuous Relaxation for Combinatorial Optimization
Unsupervised Learning (UL)-based solvers for Combinatorial Optimization (CO) train neural networks to generate soft solutions by directly optimizing the CO objective using continuous relaxation strategies. While these solvers offer advantages over traditional methods, they suffer from:
1. Optimization issues: getting trapped in local optima
2. Rounding issues: artificial rounding from the continuous to the discrete space weakens robustness
To overcome these, we propose Continuous Relaxation Annealing (CRA), a rounding-free learning method for UL-based solvers. CRA dynamically adjusts a penalty term, transitioning from smoothing the non-convex landscape to enforcing discreteness, which eliminates artificial rounding.
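In code, the annealed penalty can be sketched as follows. This is a minimal illustration of the even-power penalty used later in this README; the linear schedule and the names `cra_penalty`, `reg_param`, and `annealing_rate` are illustrative assumptions, not the package's exact implementation:

```python
import torch

def cra_penalty(probs: torch.Tensor, curve_rate: int = 2) -> torch.Tensor:
    # phi(p) = sum_i (1 - (2 p_i - 1)^alpha) with even alpha:
    # maximal at p_i = 0.5, zero at p_i in {0, 1}.
    return torch.sum(1 - (2 * probs - 1) ** curve_rate)

# Annealing: the penalty weight starts negative (acting as a smoothing term)
# and is gradually increased during training, so the same term eventually
# pushes the relaxed probabilities toward the discrete set {0, 1}.
reg_param, annealing_rate = -20.0, 1e-3
probs = torch.rand(8)
total_loss = probs @ probs + reg_param * cra_penalty(probs)  # objective + penalty
reg_param += annealing_rate  # one annealing step
```

The sign flip of the penalty weight is the core idea: the same term smooths early and discretizes late, so no separate rounding step is needed.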
Key Benefits:
- Significantly boosts UL-based solver performance
- Outperforms existing UL-based methods and greedy algorithms
- Eliminates artificial rounding
- Accelerates the learning process
This package was implemented with Python 3.11.11. To install dependencies, run:
pip install -r requirements.txt

Dependencies:
- dgl==2.1.0
- torch==2.4.0
- numpy==1.26.4
- pandas==2.2.2
- matplotlib==3.10.0
- seaborn==0.13.2
- scikit-learn==1.6.1
- networkx==3.4.2
- tqdm==4.67.1
This project is licensed under the BSD 3-Clause License. See LICENSE for details.
import random
import os
import copy
from collections import OrderedDict, defaultdict
from itertools import chain, islice, combinations
from time import time
from tqdm import tqdm
import dgl
import torch
import torch.nn as nn  # needed for the nn.Embedding layer below
import numpy as np
import networkx as nx
from src import utils, gnn, instance
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
# Fix seed for reproducibility
SEED = 0
utils.fix_seed(SEED)
torch_type = torch.float32

N, d, p, graph_type = 5000, 20, None, "reg"
nx_graph = nx.random_regular_graph(d=d, n=N, seed=SEED)
dgl_graph = dgl.from_networkx(nx_graph).to(device)
Q_mat = utils.qubo_dict_to_torch(nx_graph, instance.gen_q_dict_mis_sym(nx_graph, penalty=2)).to(device)

in_feats = int(dgl_graph.number_of_nodes() ** 0.5)
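For reference, `gen_q_dict_mis_sym` builds a QUBO encoding of the Maximum Independent Set problem. A minimal sketch of the standard MIS QUBO (the repo's exact coefficients may differ; `mis_qubo` is an illustrative helper, not part of this package) is:

```python
import numpy as np
import networkx as nx

def mis_qubo(nx_graph: nx.Graph, penalty: float = 2.0) -> np.ndarray:
    """Minimize x^T Q x = -sum_i x_i + penalty * sum_{(i,j) in E} x_i x_j:
    the diagonal rewards selecting a node, the symmetric off-diagonal
    entries penalize selecting both endpoints of an edge."""
    n = nx_graph.number_of_nodes()
    Q = np.zeros((n, n))
    np.fill_diagonal(Q, -1.0)
    for u, v in nx_graph.edges():
        Q[u, v] += penalty / 2
        Q[v, u] += penalty / 2
    return Q

Q = mis_qubo(nx.complete_graph(3))
```

With penalty > 1, violating an edge always costs more than the reward for the extra node, so minimizers of the QUBO are independent sets.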
hidden_size = in_feats
num_class = 1
dropout = 0.0
model = gnn.GCN_dev(in_feats, hidden_size, num_class, dropout, device).to(device)
embedding = nn.Embedding(dgl_graph.number_of_nodes(), in_feats).type(torch_type).to(device)

def loss(probs, reg_param, curve_rate=2):
    probs_ = torch.unsqueeze(probs, 1)
    cost = (probs_.T @ Q_mat @ probs_).squeeze()
    reg_term = torch.sum(1 - (2 * probs_ - 1) ** curve_rate)
    return cost + reg_param * reg_term, cost, reg_term

num_epoch = int(1e5)
lr = 1e-4
weight_decay = 1e-2
tol = 1e-4
patience = 1000
check_interval = 1000
curve_rate = 2
model, bit_string_PI, cost, reg_term, runtime = gnn.fit_model(
model, dgl_graph, embedding, loss,
num_epoch=num_epoch, lr=lr, weight_decay=weight_decay,
tol=tol, patience=patience, device=device,
annealing=False, init_reg_param=0,
annealing_rate=0, check_interval=check_interval, curve_rate=curve_rate
)

init_reg_param = -20
annealing_rate = 1e-3
model, bit_string_CRA, cost, reg_term, runtime = gnn.fit_model(
model, dgl_graph, embedding, loss,
num_epoch=num_epoch, lr=lr, weight_decay=weight_decay,
tol=tol, patience=patience, device=device,
annealing=True, init_reg_param=init_reg_param,
annealing_rate=annealing_rate, check_interval=check_interval,
curve_rate=curve_rate
)

size_mis_CRA, _, number_violation = utils.postprocess_gnn_mis(bit_string_CRA, nx_graph)
size_mis_PI, _, number_violation = utils.postprocess_gnn_mis(bit_string_PI, nx_graph)
print(f"Independent set size: (CRA) {size_mis_CRA.item()}, (PI) {size_mis_PI.item()}")

Expected output:
Independent set size: (CRA) 853, (PI) 0
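Beyond the repo's `postprocess_gnn_mis` helper, a rounded solution can be sanity-checked directly with networkx. The `check_mis` helper below is an illustrative sketch, not part of this package:

```python
import numpy as np
import networkx as nx

def check_mis(bit_string, nx_graph):
    """Return (set size, number of violated edges) for a 0/1 node assignment."""
    bits = np.asarray(bit_string).ravel()
    selected = {v for v in nx_graph.nodes() if bits[v] > 0.5}
    violations = sum(1 for u, v in nx_graph.edges() if u in selected and v in selected)
    return len(selected), violations

# Tiny example on a 3-node path graph 0-1-2: nodes 0 and 2 are independent.
size, violations = check_mis([1, 0, 1], nx.path_graph(3))
```

A valid independent set must report zero violations; any positive count means two adjacent nodes were both selected.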
If you use this work, please cite:
@inproceedings{
ichikawa2024controlling,
title={Controlling Continuous Relaxation for Combinatorial Optimization},
author={Yuma Ichikawa},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ykACV1IhjD}
}

Now you're ready to experiment with Continuous Relaxation Annealing. Happy researching!