Follow the steps below to set up the environment and run the inference demo.
Clone the repository:

```bash
git clone git@github.com:rayray9999/Genfocus.git
cd Genfocus
```

Environment setup:

```bash
conda create -n Genfocus python=3.12
conda activate Genfocus
```

Install requirements:

```bash
pip install -r requirements.txt
```

You can download the pre-trained models using the following commands. Ensure you are in the Genfocus root directory.
```bash
# 1. Download main models to the root directory
wget https://huggingface.co/nycu-cplab/Genfocus-Model/resolve/main/bokehNet.safetensors
wget https://huggingface.co/nycu-cplab/Genfocus-Model/resolve/main/deblurNet.safetensors

# 2. Set up the checkpoints directory and download the auxiliary model
mkdir -p checkpoints
cd checkpoints
wget https://huggingface.co/nycu-cplab/Genfocus-Model/resolve/main/checkpoints/depth_pro.pt
cd ..
```

Launch the interactive web interface locally:
Note: This project uses FLUX.1-dev. You must request access and authenticate locally before running the demo.
⚠️ VRAM warning: GPU memory usage can be high. On an NVIDIA A6000, peak usage may reach ~45GB depending on resolution and settings.
Tip (Advanced Settings): For faster inference, try lowering `num_inference_steps` or resizing the input in Advanced Settings.
```bash
python demo.py
```

The demo will be accessible at http://127.0.0.1:7860 in your browser.
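Before launching, it can help to confirm that the weights from the download step are actually in place. The following is an illustrative helper, not part of the repo; the filenames are taken from the wget commands above:

```python
from pathlib import Path

# Weight files expected under the Genfocus root, per the download step above.
EXPECTED_WEIGHTS = [
    "bokehNet.safetensors",
    "deblurNet.safetensors",
    "checkpoints/depth_pro.pt",
]

def missing_weights(root="."):
    """Return the expected weight files that are absent under `root`."""
    return [p for p in EXPECTED_WEIGHTS if not (Path(root) / p).is_file()]

if __name__ == "__main__":
    missing = missing_weights()
    if missing:
        print("Missing weight files:", ", ".join(missing))
    else:
        print("All weights found; ready to run demo.py")
```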
You can also run inference directly via the command line for our different models.
Restore sharp details from blurry images.
⚠️ Update Notice: If you downloaded the weights before our recent inference command updates, please re-download the new `deblurNet.safetensors` to ensure the best performance.
```bash
python Inference_deblurNet.py \
    --input inference_example/Blurry_example.jpg \
    --output Deblurred_output.png
```

- `--input`/`-i`: Path to the input blurry image.
- `--output`/`-o`: Path to save the output image.
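The flags above correspond to a standard argparse interface. A minimal sketch of that CLI shape follows; the actual parser in `Inference_deblurNet.py` may define more options:

```python
import argparse

# Sketch of the CLI shape implied by the flags above; illustrative only.
parser = argparse.ArgumentParser(description="DeblurNet inference (sketch)")
parser.add_argument("--input", "-i", required=True,
                    help="Path to the input blurry image")
parser.add_argument("--output", "-o", required=True,
                    help="Path to save the output image")

args = parser.parse_args([
    "--input", "inference_example/Blurry_example.jpg",
    "--output", "Deblurred_output.png",
])
print(args.input, args.output)
```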
Add realistic bokeh effects to an All-In-Focus (AIF) image using a depth/focus mask.
```bash
python Inference_bokehNet.py \
    --input inference_example/AIF_example.png \
    --mask inference_example/AIF_mask.png \
    --depth inference_example/AIF_example_pred.npy \
    --k_value 15 \
    --output Bokeh_output.png
```

- `--input`/`-i`: Path to the All-In-Focus input image.
- `--mask`/`-m`: Path to the in-focus mask image.
- `--point`/`-p`: Focus point `x,y` on the ORIGINAL image (e.g., `512,300`).
- `--depth`/`-d`: Path to a pre-computed depth map (`.npy` file). If not provided, Depth Pro is used automatically.
- `--k_value`/`-k`: Blur strength K.
- `--output`/`-o`: Path to save the output image.
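For intuition about `--k_value` and the focus point: in classical thin-lens models, defocus blur grows with the difference in inverse depth between a pixel and the focal plane, scaled by the blur strength. The sketch below illustrates that idea only; it is not Genfocus's actual blur formulation:

```python
def coc_radius(depth, focus_depth, k):
    """Simplified circle-of-confusion radius for a pixel at `depth`
    when focused at `focus_depth`, with blur strength `k`.
    Illustrative thin-lens-style model, NOT the model's real formula."""
    inv = lambda d: 1.0 / max(d, 1e-6)  # guard against zero depth
    return k * abs(inv(depth) - inv(focus_depth))

# Pixels on the focal plane stay sharp; blur grows away from it.
print(coc_radius(2.0, 2.0, 15))  # 0.0
```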
A variant of DeblurNet that utilizes a pre-deblurring module for heavily degraded images.
Note: Please download the weight file specific to this variant before running inference:

```bash
wget https://huggingface.co/nycu-cplab/Genfocus-Model/resolve/main/deblurNet_with_pre_deblur.safetensors
```

```bash
python Inference_deblurNet_with_pre_deblur.py \
    --input inference_example/Blurry_example.jpg \
    --pre_deblur_input inference_example/Blurry_example_pre_deblur.jpg \
    --output Deblurred_output_with_pre_deblur.png
```

- `--input`/`-i`: Path to the input blurry image.
- `--pre_deblur_input`: Path to the pre-processed (pre-deblurred) image.
- `--output`/`-o`: Path to save the output image.
| Argument | Type | Default | Description |
|---|---|---|---|
| `--disable_tiling` | Flag | `False` | Force disable tiling (`NO_TILED_DENOISE=True`). Note: Tiling is auto-disabled if the shortest edge is < 512 px. |
| `--steps` | Integer | `28` | Number of inference steps. Higher step counts usually yield better details but take longer. |
| `--long_side` | Integer | `0` | Resize the longest edge of the image (aspect ratio preserved, padded to a multiple of 16). `0` keeps the original size. |
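The `--long_side` behavior described above (aspect-preserving resize of the longest edge, then padding each dimension to a multiple of 16) can be sketched as follows. This is illustrative; the script's exact resize code may differ:

```python
import math

def target_size(h, w, long_side=0):
    """Size implied by --long_side: resize so the longest edge equals
    `long_side` (aspect ratio preserved), then pad each dimension up
    to a multiple of 16. long_side=0 keeps the original size.
    Illustrative sketch only."""
    if long_side > 0:
        scale = long_side / max(h, w)
        h, w = round(h * scale), round(w * scale)
    pad16 = lambda x: math.ceil(x / 16) * 16
    return pad16(h), pad16(w)

print(target_size(1080, 1920, 0))     # (1088, 1920): padding only
print(target_size(1000, 2000, 1024))  # (512, 1024): resized, then padded
```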
A ComfyUI implementation of Genfocus is available, thanks to Eric Rollei!
Check it out here: 👉 comfyui-refocus
We are actively working on improving this project. Current progress:
- Upload Model Weights
- Release HF Demo & Gradio Code (with tiling tricks for high-res images)
- Release Inference Code (Support for adjustable parameters/settings)
- Release Benchmark data
- Release Training Code and Data
If you find this project useful for your research, please consider citing:
```bibtex
@article{Genfocus2025,
  title={Generative Refocusing: Flexible Defocus Control from a Single Image},
  author={Tuan Mu, Chun-Wei and Huang, Jia-Bin and Liu, Yu-Lun},
  journal={arXiv preprint arXiv:2512.16923},
  year={2025}
}
```

For any questions or suggestions, please open an issue or contact me at raytm9999.cs09@nycu.edu.tw.
Star 🌟 this repository if you like it!
