IMPACT is a novel, task-agnostic similarity metric designed for multimodal medical image registration. Instead of relying on intensity-based metrics, handcrafted descriptors, or task-specific trained models, IMPACT reuses powerful segmentation foundation models (e.g., TotalSegmentator, SAM) as generic feature extractors. These deep features define a semantic similarity loss that is optimized directly in registration frameworks such as Elastix or VoxelMorph.
## Reference

> **IMPACT: A Generic Semantic Loss for Multimodal Image Registration**
> Valentin Boussot, Cédric Hémon, Jean-Claude Nunes, Jason Dowling, Simon Rouzé, Caroline Lafond, Anaïs Barateau, Jean-Louis Dillenseger.
> arXiv:2503.24121 (under review)
## Key Features

- **Generic, training-free**: no task-specific training needed; IMPACT reuses powerful representations from large-scale pretrained segmentation models.
- **Flexible model integration**: compatible with TorchScript 2D/3D models (e.g., TotalSegmentator, SAM2.1, MIND), supporting multi-layer and multi-model fusion, multi-resolution setups, and fully open to experimentation with custom architectures and configurations.
- **Jacobian vs. Static optimization modes**: choose between the fully differentiable Jacobian mode (for downsampling models) and the fast inference-only Static mode, depending on the model type and computation time constraints.
- **Robust across modalities**: handles complex multimodal scenarios (CT/CBCT, MR/CT) using a unified semantic loss robust to intensity variations.
- **Benchmark-proven**: ranked among the top participants in multiple Learn2Reg challenges, showing state-of-the-art performance across diverse tasks (thorax, abdomen, pelvis, CT/CBCT/MRI).
- **Seamless integration with Elastix**: natively implemented as a standard Elastix metric, IMPACT inherits all the strengths of classical registration: multi-resolution strategies, mask support, sparse deformation models, and full reproducibility. It also handles images of different sizes, resolutions, and fields of view, making it ideal for real-world clinical datasets with heterogeneous inputs.
- **Efficient runtime for standard registration tasks**: ~150 seconds in Static mode, ~300 seconds in Jacobian mode.
- **Docker-ready for quick deployment**: runs out of the box with a single Docker command; no need to install dependencies manually.
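The practical difference between the two optimization modes is whether gradients flow through the feature extractor. A minimal sketch of that distinction, using a stand-in convolutional extractor rather than an actual segmentation model:

```python
import torch
import torch.nn as nn

# Stand-in feature extractor (a real setup would load a TorchScript model).
extractor = nn.Conv3d(1, 8, kernel_size=3, padding=1)

moving = torch.rand(1, 1, 16, 16, 16, requires_grad=True)
fixed = torch.rand(1, 1, 16, 16, 16)

# Jacobian mode: features stay differentiable, so the similarity gradient
# is backpropagated through the network onto the moving image.
feat_moving = extractor(moving)
with torch.no_grad():
    feat_fixed = extractor(fixed)
loss_jacobian = (feat_moving - feat_fixed).abs().mean()
loss_jacobian.backward()
print(moving.grad is not None)  # True: gradients flow through the extractor

# Static mode: features are computed in inference mode only, so no gradient
# is propagated through the extractor (cheaper, but less informative).
with torch.no_grad():
    feat_moving_static = extractor(moving)
    feat_fixed_static = extractor(fixed)
loss_static = (feat_moving_static - feat_fixed_static).abs().mean()
print(loss_static.requires_grad)  # False: inference-only features
```

This is only an illustration of the gradient-flow trade-off; in Elastix the transform parameters are optimized against the metric rather than the image tensor itself.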
IMPACT has demonstrated strong generalization performance across multiple tasks without training.

## Learn2Reg Challenge
| Challenge | Task | Rank |
|---|---|---|
| Learn2Reg 2021 | CT Lung Registration | 🥉 3rd |
| Learn2Reg 2023 | Thorax CBCT | Top-6 |
| Learn2Reg 2023 | Abdomen MR-CT | 🥈 2nd |
Beyond its performance, IMPACT is designed as a modular platform that facilitates systematic experimentation with pretrained models, feature layers, and distance functions. This flexibility enables researchers to explore various feature extraction methods, fostering innovation and adaptability in multimodal image registration tasks.
Model performance depends on both the feature extraction strategy and the choice of extractor models.
The following configurations were found to be optimal in the IMPACT study:
| Scenario | Optimal Configuration | Rationale |
|---|---|---|
| CT/CBCT | Early feature layers (2-Layers) + Jacobian mode | Early layers of segmentation networks tend to denoise and enhance anatomical structures across modalities, improving geometric alignment and robustness to artifacts. |
| MR/CT | High-level feature layer (7-Layers) + Static mode + MIND | Registration behaves more like contour-based, segmentation-driven alignment; MIND complements it by capturing intra-organ detail, leading to better anatomical consistency. |
| Model | Type | Typical Use | Comment |
|---|---|---|---|
| TS/M730 | MR and CT (3D) | Default baseline | Most stable and general-purpose model. |
| SAM2.1 | Foundation (2D) | Fast evaluation | Good generalization; suitable for quick or exploratory 2D experiments. |
| M258 | CT (3D, lung vessels) | Organ-specific | Models trained on the target anatomical structure (e.g., lung or vessels) provide better local alignment in the corresponding regions. |
| MIND | Handcrafted descriptor | Cross-modality | Complements contour-based methods by recovering intra-organ information, enhancing MR/CT alignment. |
- CT/CBCT → early layers + Jacobian mode: enhance structure visibility while reducing noise and artifacts.
- MR/CT → high-level layers + Static mode + MIND: emphasize anatomical contours and intra-organ consistency.
- Use `TS/M730_2_Layers` as the default model, and organ-specific models (e.g., `M258`) for targeted anatomical regions.
The easiest way to test IMPACT is to use the prebuilt Docker image from Docker Hub:

```bash
docker pull vboussot/elastix_impact
```

Then, run Elastix with your own data:

```bash
docker run --rm --gpus all \
    -v "./Data:/Data" \
    -v "./Out:/Out" \
    vboussot/elastix_impact
```

Make sure that the `Data/` folder contains:

- `Fixed_image.mha` and `Moving_image.mha`: your input images to be registered, in either `.mha` or `.nii.gz` format.
- `ParameterMap.txt` using an IMPACT configuration. See `ParameterMaps/README.md` for detailed configuration examples.
- A `Data/Models/` directory with TorchScript models. See `Data/Models/README.md` for model download instructions.
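The expected layout can be prepared like this (the `touch` lines are placeholders; copy your own images, parameter map, and models into the corresponding locations):

```shell
# Create the folder layout expected by the Docker image.
mkdir -p Data/Models Out

# Placeholders for your own inputs and configuration:
touch Data/Fixed_image.mha Data/Moving_image.mha Data/ParameterMap.txt

# Inspect the resulting layout.
ls -R Data Out
```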
See Docker/README.md for full details and usage examples.
If you want to build the image yourself:
```bash
git clone https://github.com/vboussot/ImpactLoss.git
cd ImpactLoss
```

Build the Docker image:

```bash
docker build -t elastix_impact Docker
```

Precompiled Elastix + IMPACT binaries are available for Linux, Windows, and macOS (CPU and CUDA variants) in the ImpactElastix release.
You can choose between two installation methods:
- Direct download โ manually download the binaries from the release.
- Automatic installation (recommended) โ use the provided installer
install.py.
The installer automatically:
- detects your operating system and GPU,
- selects the appropriate CPU or CUDA (12.8) binaries,
- downloads the correct Elastix release,
- downloads LibTorch 2.8.0 (CUDA 12.8 or CPU) when required,
- ensures all shared libraries are visible at runtime.
Minimum NVIDIA driver:

- Linux ≥ 570.26
- Windows ≥ 570.65

The CUDA Toolkit is not required (driver only).
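The platform detection performed by the installer can be approximated as follows. This is a sketch of the selection logic, not the actual `install.py` code; the variant names and the `nvidia-smi` check are illustrative assumptions:

```python
import platform
import shutil

def select_variant():
    """Pick a plausible binary variant from the OS and GPU presence.

    Illustrative only: the real install.py may use different names and checks.
    """
    system = platform.system().lower()  # 'linux', 'windows', or 'darwin'
    # Assume a CUDA-capable setup when the NVIDIA driver tools are on PATH.
    has_nvidia = shutil.which("nvidia-smi") is not None
    # macOS builds are CPU-only; elsewhere prefer CUDA 12.8 when available.
    backend = "cuda12.8" if (has_nvidia and system != "darwin") else "cpu"
    return f"{system}-{backend}"

print(select_variant())
```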
Build Elastix with IMPACT support directly on your machine.
Download and extract the C++ distribution of LibTorch (with or without CUDA) from the official website:
https://pytorch.org/
```bash
git clone https://github.com/InsightSoftwareConsortium/ITK.git
mkdir ITK-build ITK-install
cd ITK-build
cmake -DCMAKE_INSTALL_PREFIX=../ITK-install ../ITK
make install
cd ..
```

- Clone the ImpactElastix repository:

```bash
git clone https://github.com/vboussot/ImpactElastix.git
```

- Create build and install directories:

```bash
mkdir ImpactElastix-build ImpactElastix-install
cd ImpactElastix-build
```

- Configure the build with CMake:

```bash
cmake -DTorch_DIR=../libtorch/share/cmake/Torch/ \
      -DITK_DIR=../ITK-install/lib/cmake/ITK-6.0/ \
      -DCMAKE_INSTALL_PREFIX=../ImpactElastix-install \
      -DUSE_ImpactMetric=ON \
      ../ImpactElastix
```

- `Torch_DIR`: path to the CMake config directory of LibTorch (usually inside `libtorch/share/cmake/Torch/`)
- `ITK_DIR`: path to the CMake config directory of ITK, typically inside your ITK install folder (e.g., `ITK-install/lib/cmake/ITK-*`)
- Build and install Elastix with IMPACT:

```bash
make install
```

The final binaries will be located in:

```
../ImpactElastix-install/bin/elastix
```

Before running elastix, make sure the required shared libraries are accessible at runtime by setting `LD_LIBRARY_PATH`:

```bash
export LD_LIBRARY_PATH=lib/libtorch/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=ImpactElastix-install/lib:$LD_LIBRARY_PATH
```

You can then run:

```bash
../ImpactElastix-install/bin/elastix
```

To use IMPACT, start by downloading the pretrained TorchScript models.
See `Data/Models/README.md` for download instructions.

Elastix is executed as usual, using a parameter map configured to use the IMPACT metric.
Refer to `ParameterMaps/README.md` for detailed configuration examples.
Input images must not be preprocessed for intensity normalization before being passed to IMPACT.
All images must be provided in their raw, native intensity space (e.g., HU for CT, native values for MRI).
Each model is responsible for applying the same normalization strategy that was used during its training,
directly inside its forward method.
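For example, a CT model could embed its intensity normalization directly in `forward`, so raw HU images can be passed as-is. This is a toy sketch with an assumed clipping range, not the normalization of any specific IMPACT model:

```python
import torch
import torch.nn as nn

class CTFeatureExtractor(nn.Module):
    """Toy extractor that normalizes raw HU values inside forward().

    The clipping range [-1024, 1024] and the single conv layer are
    illustrative; each real model applies the normalization that was
    used during its own training.
    """

    def __init__(self):
        super().__init__()
        self.features = nn.Conv3d(1, 8, kernel_size=3, padding=1)

    def forward(self, x):
        # Normalization lives inside the model: clip HU, rescale to [0, 1].
        x = x.clamp(-1024.0, 1024.0)
        x = (x + 1024.0) / 2048.0
        return self.features(x)

model = CTFeatureExtractor()
ct = torch.randint(-1200, 2000, (1, 1, 16, 16, 16)).float()  # raw HU values
feats = model(ct)
print(feats.shape)  # torch.Size([1, 8, 16, 16, 16])
```

Bundling the normalization into the exported TorchScript model is what lets the registration pipeline stay agnostic to each model's training-time preprocessing.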
A complete example of how to run registration with IMPACT is provided in `run_impact_example.py`.
You can also use IMPACT directly as a PyTorch loss module.
The implementation is available in IMPACT.py.
```python
from IMPACT import IMPACT
import torch

# Instantiate the IMPACT loss
loss_fn = IMPACT(
    model_name="TS/M730_2_Layers",  # TorchScript model on Hugging Face
    shape=[0, 0, 0],                # [H, W, D] for explicit size, or [0, 0, 0] to disable resampling
    in_channels=1,                  # Number of input channels
    weights=[1, 1]                  # One weight per output layer
)

# Example 3D tensors
A = torch.rand(1, 1, 128, 128, 128)
B = torch.rand(1, 1, 128, 128, 128)

# Compute similarity loss
loss = loss_fn(A, B)
print(loss)
```

The `IMPACT` module:

- Automatically downloads TorchScript models from Hugging Face. The available model names follow the same folder hierarchy as the Hugging Face repository, e.g. `TS/M730.pt`, `SAM2.1/SAM2.1_Tiny.pt`, `MIND/R2D2_2D.pt`.
- Caches models under `~/.IMPACT/models/`.
- Handles resizing and channel replication.
- Computes a weighted L1 semantic loss between deep feature maps.
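The weighted L1 semantic loss amounts to the following computation, shown here on stand-in feature maps (the real module obtains them from the chosen layers of the TorchScript model, weighted by the `weights` argument):

```python
import torch

def weighted_l1(features_a, features_b, weights):
    """Weighted L1 semantic loss: one mean absolute difference per feature
    layer, combined using the per-layer weights."""
    total = 0.0
    for fa, fb, w in zip(features_a, features_b, weights):
        total = total + w * (fa - fb).abs().mean()
    return total

# Two layers of stand-in feature maps for images A and B.
feats_a = [torch.rand(1, 8, 32, 32, 32), torch.rand(1, 16, 16, 16, 16)]
feats_b = [torch.rand(1, 8, 32, 32, 32), torch.rand(1, 16, 16, 16, 16)]

loss = weighted_l1(feats_a, feats_b, weights=[1, 1])
print(loss)
```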