This codebase is the official implementation of Learning Null Geodesics for Gravitational Lensing Rendering in General Relativity published in ICCV 2025.
GravLensX replaces expensive iterative geodesic integration with trained neural networks that can predict any point along a light ray in a single forward pass.
The pipeline has three main stages:
- Data Generation: sample, calculate and save null geodesics under the superposed Kerr metric.
- Training: Train Physics‑Informed Neural Networks (PINNs), one for each near field and one for the far field.
- Neural Rendering: Render the image using the trained PINNs.
An Euler‑based baseline (euler_render.py) is also included.
- Clone the repository:
git clone https://github.com/NEU-REAL/GravLensX
- Our code is implemented in Python 3.10 with PyTorch 2.2.1 and Taichi 1.7.2. You can either set up the environment on your own or use the provided conda environment file as follows:
conda env create -f environment.yaml
- Download the texture files and unzip the texture folder into the project directory.
First, specify your dataset directory in dataset.yaml; the calculated geodesic data will be saved there. For example:
```yaml
dataset_dir: /home/dataset/BH-Space
```
Next, run the data generation script:
```shell
python generate_geodesic_data.py
```

Key parameters:
- `black_hole_positions` (N, 3): the positions of the black holes.
- `black_hole_masses` (N): the masses of the black holes.
- `black_hole_spins` (N): the spin angular momenta of the black holes.
- `near_field_radius`: the radius of the near field.
- `sample_min_radius`: the minimum radius (w.r.t. a black hole center or the world center) for sampling ray origins.
- `sample_max_radius`: the maximum radius (w.r.t. a black hole center or the world center) for sampling ray origins.
- `min_radius`: the radius (w.r.t. a black hole center) for judging whether a ray has terminated (entered the near field or fallen into the black hole).
- `max_radius`: the radius (w.r.t. a black hole center or the world center) for judging whether a ray has terminated (entered the far field or reached the sky sphere).
- `data_length`: the number of samples to generate for each near field and the far field.
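To illustrate how `sample_min_radius` and `sample_max_radius` bound the ray origins, here is a minimal sketch that samples origins uniformly in a spherical shell around a center. The function name and the uniform-in-volume distribution are assumptions for illustration; the actual sampling in generate_geodesic_data.py may differ.

```python
import numpy as np

def sample_ray_origins(center, sample_min_radius, sample_max_radius, n, rng=None):
    """Sample n ray origins uniformly in the spherical shell
    sample_min_radius <= r <= sample_max_radius around `center`.

    Illustrative sketch only; not the project's actual sampler.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Uniform in volume: r^3 is uniform between the cubed radius bounds.
    u = rng.random(n)
    r = (sample_min_radius**3 + u * (sample_max_radius**3 - sample_min_radius**3)) ** (1.0 / 3.0)
    # Uniform directions on the unit sphere via normalized Gaussians.
    d = rng.normal(size=(n, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    return np.asarray(center) + r[:, None] * d

origins = sample_ray_origins(center=[0.0, 0.0, 0.0],
                             sample_min_radius=2.0,
                             sample_max_radius=10.0, n=1000)
print(origins.shape)  # (1000, 3)
```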
Near Field (one process and one GPU per black hole)
Before training, align the following parameters in near_field_train_distributed.py with the data generation settings:
- `--black_hole_radius`: aligned to `near_field_radius` in generate_geodesic_data.py.
- `black_hole_positions`: aligned to `black_hole_positions` in generate_geodesic_data.py.
- `--data_length`: aligned to `data_length` in generate_geodesic_data.py.
Then, run the training script:
```shell
python near_field_train_distributed.py
```

Far Field (one process with one or more GPUs)
Ensure `--num_bh` is correctly set to the number of black holes in your dataset, and `data_length` is aligned to `data_length` in generate_geodesic_data.py.
Then, run the training script:
```shell
python far_field_train_distributed.py
```

Make sure the checkpoints saved from training are correctly loaded, and ensure the following parameters in neural_render.py are aligned with your dataset:
- `pos_bh`: aligned to `black_hole_positions` in generate_geodesic_data.py.
- `mass_bh`: aligned to `black_hole_masses` in generate_geodesic_data.py.
Then run the rendering script:
```shell
python neural_render.py
```

Key parameters:
- `image_width` & `image_height`: the size of the output image.
- `look_from`: the camera position.
- `look_at`: the point that the camera is looking at.
- `--min_radius`: $l^{in}$ in our paper.
- `--near_bh_distance`: the in-black-hole radius in our paper, i.e., the radius for judging whether a ray has fallen into a black hole.
- `--loose_boundary`: $\epsilon$ in our paper.
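As a sketch of how `look_from` and `look_at` define the view, the snippet below builds per-pixel ray directions for a pinhole camera. The field of view, up vector, and pixel conventions here are assumptions for illustration; the actual camera model in neural_render.py may differ.

```python
import numpy as np

def camera_rays(look_from, look_at, up, fov_deg, width, height):
    """Per-pixel unit ray directions for a pinhole camera looking
    from `look_from` toward `look_at`. Illustrative sketch only."""
    look_from = np.asarray(look_from, dtype=float)
    forward = np.asarray(look_at, dtype=float) - look_from
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    # Image-plane half extents at unit distance from the camera.
    half_h = np.tan(np.radians(fov_deg) / 2.0)
    half_w = half_h * width / height
    xs = np.linspace(-half_w, half_w, width)
    ys = np.linspace(half_h, -half_h, height)  # top row first
    dirs = (forward[None, None]
            + xs[None, :, None] * right
            + ys[:, None, None] * true_up)
    return dirs / np.linalg.norm(dirs, axis=-1, keepdims=True)

rays = camera_rays([0, 0, -10], [0, 0, 0], [0, 1, 0],
                   fov_deg=60, width=640, height=480)
print(rays.shape)  # (480, 640, 3)
```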
```shell
python euler_render.py
```

The key parameters are similar to those in neural_render.py.
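For intuition about what the Euler baseline does, here is a toy forward-Euler ray tracer for a second-order ODE $\ddot{x} = a(x)$. This is only a sketch of the iterative stepping that the PINNs replace; the real baseline integrates the null geodesic equations of the superposed Kerr metric, not this simple Newtonian analogue.

```python
import numpy as np

def euler_trace(x0, v0, accel, step, n_steps):
    """Forward-Euler integration of a ray obeying x'' = accel(x).

    Toy sketch of iterative geodesic stepping; not the project's
    actual integrator for the superposed Kerr metric.
    """
    x = np.asarray(x0, dtype=float).copy()
    v = np.asarray(v0, dtype=float).copy()
    path = [x.copy()]
    for _ in range(n_steps):
        v += step * accel(x)   # update velocity from the local acceleration
        x += step * v          # advance the position
        path.append(x.copy())
    return np.stack(path)

# Example: deflection toward a point mass at the origin (illustrative only).
accel = lambda x: -x / np.linalg.norm(x) ** 3
path = euler_trace([-10.0, 2.0, 0.0], [1.0, 0.0, 0.0], accel,
                   step=0.01, n_steps=2000)
```

Each Euler step trades accuracy for speed, which is why many small steps are needed per ray; the neural renderer instead predicts any point on the ray in one forward pass.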
We would like to thank Taichi for their powerful graphics framework and BlackHoleRayMarching for the open source code. Their work served as both an important foundation and an inspiration for this project.
If you find this code useful, please consider citing our work:
@inproceedings{sun2025learning,
title={Learning Null Geodesics for Gravitational Lensing Rendering in General Relativity},
author={Sun, Mingyuan and Fang, Zheng and Wang, Jiaxu and Zhang, Kunyi and Zhang, Qiang and Xu, Renjing},
booktitle={2025 IEEE/CVF International Conference on Computer Vision (ICCV)},
year={2025},
organization={IEEE}
}