High-performance VNN-LIB to PyTorch tensor and NumPy array converter for neural network verification.
Tested on all VNN-COMP 2024 benchmarks with 100% conversion success rate.
- Test Suite: 1153 comprehensive tests with 100% pass rate
- Code Coverage: 94% statement coverage (1583 statements, 90 uncovered)
- Type Safety: Fully typed with mypy validation (0 errors)
- Code Quality: Enforced with ruff linter (all checks passing)
- VNN-COMP 2024: 100% conversion success rate on all 23 benchmarks
- Dual Backend: Both PyTorch and NumPy backends fully tested
VNN-LIB is the standard specification format for neural network verification properties, used extensively in VNN-COMP and verification research. However, text-based .vnnlib files present significant performance challenges for modern tensor-based verification tools:
- Slow text parsing for large property files
- Inefficient for GPU-accelerated operations
- Cumbersome integration with PyTorch verification pipelines
- Repeated parsing overhead for batch processing
TorchVNNLIB solves these problems by converting VNN-LIB specifications into efficient binary formats:
- PyTorch Backend:
.pthfiles with native GPU-ready tensors - NumPy Backend:
.npzfiles with lightweight arrays (no PyTorch dependency) - 10-100x faster loading compared to text parsing
- Standardized constraint representation
- Dual processing pipelines optimized for common VNN-LIB patterns
Neural network verification tools like α,β-CROWN, ERAN, and Marabou require repeated loading of verification properties. For benchmarks with hundreds of properties, parsing overhead becomes significant. TorchVNNLIB pre-processes VNN-LIB files once into binary tensor/array formats, enabling:
- Fast repeated loading during verification
- Direct GPU transfer for PyTorch-based verifiers
- Efficient batch processing of properties
- Integration with existing PyTorch verification codebases
- Dual Backend Support: PyTorch tensors or NumPy arrays
- No PyTorch Dependency Required: Use NumPy backend for lightweight deployment
- Optimized Processing: Type-specific fast processors for common VNN-LIB patterns (Type 1-5)
- AST-Based Parser: General-purpose parser for complex specifications
- Parallel Processing: Multi-threaded conversion for large benchmark suites
- Standardized Output: Consistent constraint representation across all backends
- Production Ready: Tested on all VNN-COMP 2024 benchmarks
TorchVNNLIB is currently preparing for public release and is not available on PyPI. Install from source:
# Clone the repository
git clone https://github.com/ZhongkuiMa/torchvnnlib.git
cd torchvnnlib
# Install in editable mode with development dependencies
pip install -e ".[dev]"# Check version
python -c "import torchvnnlib; print(torchvnnlib.__version__)"
# Expected: 2026.1.0
# Run test suite
pytest tests/ -v
# Expected: 1153 passed
# Run linting
ruff check src/torchvnnlib testsDevelopment mode (recommended for contributors):
pip install -e ".[dev]" # Includes pytest, ruff, mypyMinimal install with NumPy backend only:
pip install -e . # Only numpy dependencyFull install with PyTorch backend:
pip install -e ".[torch]" # Adds torch dependency- Python: 3.11 or higher
- Core dependencies: numpy==1.26.4
- Optional dependencies:
- torch==2.3.1 (for PyTorch backend)
- Development dependencies: pytest, pytest-cov, pytest-timeout, ruff, mypy
The -e flag installs in "editable" mode, which:
- Creates a link to the source code instead of copying files
- Changes to source code take effect immediately without reinstalling
- Essential for development and testing
- Allows you to modify the library and test changes instantly
from torchvnnlib import TorchVNNLIB
import numpy as np
# Initialize converter with NumPy backend
converter = TorchVNNLIB(
verbose=True,
detect_fast_type=True,
output_format="numpy"
)
# Convert VNN-LIB to NumPy arrays
converter.convert("property.vnnlib", target_folder_path="output")
# Load converted arrays
data = np.load("output/or_group_0/sub_prop_0.npz")
# Access constraints
input_bounds = data["input"] # Shape: (n_inputs, 2)
output_constraints = data["output"] # List of constraint arraysfrom torchvnnlib import TorchVNNLIB
import torch
# Initialize converter with PyTorch backend
converter = TorchVNNLIB(
verbose=True,
detect_fast_type=True,
output_format="torch"
)
# Convert VNN-LIB to PyTorch tensors
converter.convert("property.vnnlib", target_folder_path="output")
# Load converted tensors
data = torch.load("output/or_group_0/sub_prop_0.pth")
# Access constraints (GPU-ready)
input_bounds = data["input"].cuda()
output_constraints = [c.cuda() for c in data["output"]]TorchVNNLIB uses optimized type-specific processors for common VNN-LIB patterns:
- Type 1: Simple input bounds with simple output constraints
- Type 2: Simple input bounds with OR-grouped output constraints
- Type 3: OR-grouped input bounds with simple output constraints
- Type 4: OR-grouped input bounds with OR-grouped output constraints
- Type 5: Top-level OR wrapping complete properties
For complex or non-standard specifications, the AST-based parser provides general-purpose conversion.
output/
├── or_group_0/
│ ├── sub_prop_0.pth (or .npz)
│ ├── sub_prop_1.pth (or .npz)
│ └── ...
├── or_group_1/
│ ├── sub_prop_0.pth (or .npz)
│ └── ...
Each file contains a dictionary with:
"input": Input bounds array/tensor"output": List of output constraint arrays/tensors
Shape: (n_inputs, 2)
[[lower_bound_0, upper_bound_0],
[lower_bound_1, upper_bound_1],
...]Each row specifies [lower, upper] bounds for one input variable.
List of arrays/tensors with constraint format: b + Ax ≥ 0
Each constraint array has shape (n_constraints, 1 + n_vars) where:
- Column 0: Bias term
b - Columns 1 to
n_outputs: Output variable coefficients - Columns
n_outputs+1to end: Input variable coefficients (if applicable)
Example constraint array:
[[b_0, a_00, a_01, ..., a_0n],
[b_1, a_10, a_11, ..., a_1n],
...]Each row represents one linear inequality: b_i + a_i0*y_0 + a_i1*y_1 + ... ≥ 0
TorchVNNLIB(
verbose: bool = False,
use_parallel: bool = False,
detect_fast_type: bool = True,
output_format: str = "torch"
)Main converter class for VNN-LIB to tensor/array conversion.
Parameters:
verbose(bool): Print detailed timing and processing informationuse_parallel(bool): Enable parallel processing for multiple propertiesdetect_fast_type(bool): Use optimized type-specific processors (recommended: True)output_format(str): Output format -"torch"for .pth or"numpy"for .npz
Methods:
converter.convert(
vnnlib_path: str,
target_folder_path: str | None = None
)Convert VNN-LIB file to tensor/array format.
Parameters:
vnnlib_path(str): Path to .vnnlib input filetarget_folder_path(str | None): Output directory (default:<vnnlib_path>without extension)
Returns: None (writes files to disk)
VNN-LIB specification:
(declare-const X_0 Real)
(declare-const Y_0 Real)
(declare-const Y_1 Real)
; Input bounds
(assert (<= X_0 0.5))
(assert (>= X_0 -0.5))
; Output constraint: Y_0 <= Y_1
(assert (<= Y_0 Y_1))Python conversion:
from torchvnnlib import TorchVNNLIB
import numpy as np
converter = TorchVNNLIB(output_format="numpy", verbose=True)
converter.convert("simple_property.vnnlib", "output")
data = np.load("output/or_group_0/sub_prop_0.npz")
print(f"Input bounds: {data['input']}") # [[-0.5, 0.5]]
print(f"Output constraints: {data['output']}") # [array([[0.0, -1.0, 1.0]])]from torchvnnlib import TorchVNNLIB
from pathlib import Path
converter = TorchVNNLIB(
output_format="numpy",
use_parallel=True,
detect_fast_type=True
)
# Convert all VNN-LIB files in a directory
vnnlib_dir = Path("benchmarks/acasxu/vnnlib")
for vnnlib_file in vnnlib_dir.glob("*.vnnlib"):
output_dir = f"converted/{vnnlib_file.stem}"
converter.convert(str(vnnlib_file), output_dir)
print(f"Converted {vnnlib_file.name}")import torch
from torchvnnlib import TorchVNNLIB
# Convert once
converter = TorchVNNLIB(output_format="torch")
converter.convert("property.vnnlib", "property_tensors")
# Load multiple times during verification
for iteration in range(100):
data = torch.load("property_tensors/or_group_0/sub_prop_0.pth")
input_bounds = data["input"].cuda()
output_constraints = [c.cuda() for c in data["output"]]
# Run verification with GPU-accelerated bounds
# verify_network(model, input_bounds, output_constraints)(declare-const X_i Real) ; Input variables (X_0, X_1, ...)
(declare-const Y_i Real) ; Output variables (Y_0, Y_1, ...)(assert (<= X_0 1.0)) ; Upper bound
(assert (>= X_0 -1.0)) ; Lower bound
(assert (<= Y_0 Y_1)) ; Output constraint(and expr1 expr2 ...) ; Conjunction
(or expr1 expr2 ...) ; Disjunction(+ expr1 expr2 ...) ; Addition
(- expr1 expr2) ; Subtraction
(* coefficient variable) ; Scalar multiplicationAll constraints are normalized to the canonical form: b + Ax ≥ 0
VNN-LIB File
│
├─> Preprocessing (extract declarations, assertions)
│
├─> Type Detection (pattern matching for Type 1-5)
│
├─────> Fast Type Processor (Type 1-5)
│ │
│ └─> Direct Tensor/Array Conversion
│
└─────> AST-Based Processor (Complex/General)
│
├─> Tokenization
├─> Parsing
├─> AST Optimization
└─> Flattening to Constraints
│
└─> Save as .pth or .npz files
torchvnnlib/
├── __init__.py # Public API
├── _backend.py # Backend abstraction (torch/numpy)
├── _torchvnnlib.py # Main converter class
├── _to_tensor.py # AST to tensor conversion
├── ast/ # AST-based parsing pipeline
│ ├── _expr.py # Expression AST nodes
│ ├── _tokenize.py # Tokenizer
│ ├── _parse.py # Parser
│ ├── _optimize.py # AST optimization
│ ├── _flatten.py # Property flattening
│ └── _preprocess.py # Preprocessing utilities
└── fast_type/ # Type-specific optimized processors
├── _fast_type_detect.py # Type detection
├── _type1_processor.py # Type 1 processor
├── _type2_processor.py # Type 2 processor
├── _type3_processor.py # Type 3 processor
├── _type4_processor.py # Type 4 processor
├── _type5_processor.py # Type 5 processor
└── _utils.py # Shared utilities
✅ ALL TESTS PASSED: 1153/1153 (100% success rate)
============================= test session starts ==============================
Platform: Linux, Python 3.11.6, pytest-7.4.3
Test execution time: 2.63 seconds
Coverage: 94% (1583 statements analyzed, 90 uncovered)
============================= 1153 passed in 2.63s ==============================
Directory: tests/test_units/
Test Categories:
| Category | Tests | Status | Focus |
|---|---|---|---|
| AST Parsing | ~150 | ✅ PASS | Tokenization, parsing, optimization, flattening |
| Expression Optimization | ~80 | ✅ PASS | AST node optimization and simplification |
| Fast Type Processors | ~400 | ✅ PASS | Type 1-5 optimized processing |
| Backend Abstraction | ~150 | ✅ PASS | PyTorch and NumPy backend implementations |
| Tensor Conversion | ~200 | ✅ PASS | AST to tensor/array conversion |
| Integration Tests | ~73 | ✅ PASS | End-to-end VNN-LIB conversion workflows |
Key Validations:
- AST parsing, optimization, and flattening
- PyTorch and NumPy backend compatibility
- Type detection (Type 1-5)
- VNN-LIB syntax parsing and constraint normalization
- Tensor conversion accuracy
Module Coverage:
src/torchvnnlib/__init__.py 100% (16/16 statements)
src/torchvnnlib/_backend.py 94% (82/87 statements)
src/torchvnnlib/_torchvnnlib.py 92% (71/77 statements)
src/torchvnnlib/_to_tensor.py 91% (66/72 statements)
src/torchvnnlib/ast/_expr.py 94% (165/175 statements)
src/torchvnnlib/ast/_tokenize.py 96% (47/49 statements)
src/torchvnnlib/ast/_parse.py 97% (35/36 statements)
src/torchvnnlib/ast/_preprocess.py 96% (48/50 statements)
src/torchvnnlib/ast/_optimize.py 94% (58/62 statements)
src/torchvnnlib/ast/_flatten.py 89% (89/100 statements)
src/torchvnnlib/ast/_print.py 86% (12/14 statements)
src/torchvnnlib/fast_type/_fast_type_detect.py 91% (48/53 statements)
src/torchvnnlib/fast_type/_utils.py 92% (58/63 statements)
src/torchvnnlib/fast_type/_type1_processor.py 93% (39/42 statements)
src/torchvnnlib/fast_type/_type2_processor.py 92% (42/46 statements)
src/torchvnnlib/fast_type/_type3_processor.py 92% (47/51 statements)
src/torchvnnlib/fast_type/_type4_processor.py 91% (49/54 statements)
src/torchvnnlib/fast_type/_type5_processor.py 86% (31/36 statements)
------------------------------------------------------------------
TOTAL 94% (1583/1693 statements)
TorchVNNLIB has been tested on all VNN-COMP 2024 benchmarks:
- Total Benchmarks: 23
- Total Properties: 1000+ VNN-LIB files
- Conversion Success Rate: 100%
- Type Coverage: Type 1-5 plus complex general specifications
- Backend Testing: Both PyTorch and NumPy backends validated
| Metric | Result | Status |
|---|---|---|
| Tests Passed | 1153/1153 | ✅ 100% |
| Code Coverage | 1583/1693 statements | ✅ 94% |
| Type Checking | 0 errors | ✅ Clean |
| Linting (Ruff) | 0 violations | ✅ Clean |
| Format Compliance | All files | ✅ Compliant |
| Execution Time | 2.63 seconds | ✅ Fast |
cd torchvnnlib
# Run unit tests (default)
python -m pytest tests/ -v
# Run all tests including benchmarks
python -m pytest tests/ tests/test_benchmarks/ -v
# Run with coverage report
python -m pytest tests/ --cov=src/torchvnnlib --cov-report=term-missing --cov-report=html -v- Fast Type Processing: 10-100x faster than AST-based parsing for common patterns
- Binary Loading: Orders of magnitude faster than text parsing
- Memory Efficient: Pre-allocated tensors/arrays minimize allocations
- NumPy Backend: Significantly smaller memory footprint than PyTorch
- GPU Transfer: Direct .pth loading enables immediate GPU operations
Benchmark: Converting 100 ACAS Xu properties takes approximately 2-3 seconds on modern hardware.
- Input variables must have both lower and upper bounds (closed intervals)
- Only linear constraints are supported
- Variable naming must follow
X_i(inputs) andY_i(outputs) convention - Division operations have limited support
- Control flow constructs (if-then-else) not supported
- SlimONNX: ONNX model optimization for verification. Works together with TorchVNNLIB for complete verification workflows.
- ShapeONNX: Shape inference for ONNX models.
- VNN-COMP: International Verification of Neural Networks Competition.
See CONTRIBUTING.md for development setup, testing procedures, code quality standards, and pull request guidelines.
MIT License. See LICENSE file for details.