Download the following files from the SynBrain HuggingFace repository:
- Checkpoints for Sub1 and Sub2
- Generation results of SynBrain for all subjects
- Data for quick evaluation
- Create the conda environment from `environment.yaml` in the main directory:
  `conda env create -f environment.yaml`
  This is an extensive environment and may include redundant libraries; you can also build a minimal environment by checking the requirements yourself.
Note: you need to set your own paths to run the code.
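Since several scripts expect hard-coded paths, one minimal pattern is to define them in one place and put the repo root on `sys.path`. The locations below (`~/SynBrain`, `data/`, `checkpoints/`) are hypothetical — substitute wherever you cloned the repo and stored the data:

```python
import os
import sys

# Hypothetical locations — replace with your own checkout and data dirs.
PROJECT_ROOT = os.path.expanduser("~/SynBrain")
DATA_PATH = os.path.join(PROJECT_ROOT, "data")
SAVE_PATH = os.path.join(PROJECT_ROOT, "checkpoints")

# Make the repo's modules importable from any working directory.
sys.path.insert(0, PROJECT_ROOT)
```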
- Download NSD data from the NSD AWS server;
- Download the `COCO_73k_annots_curated.npy` file from HuggingFace NSD;
- Prepare the visual stimuli and fMRI data:

  ```
  cd data
  python download_nsddata.py
  python prepare_nsddata_sclae.py -sub x
  ```

- Extract CLIP image embeddings by running `extract_features_sdxl_unclip.ipynb`.
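To sanity-check the downloaded annotations you can inspect the `.npy` file with NumPy. The real file's exact layout is an assumption here (roughly one caption entry per NSD stimulus), so this sketch fabricates a tiny stand-in array to show the access pattern rather than reading the real file:

```python
import numpy as np

# Fabricated stand-in for COCO_73k_annots_curated.npy; the real file's
# layout (one caption set per NSD stimulus) is an assumption.
demo = np.array([["a cat sitting on a mat"],
                 ["a group of people on a beach"]], dtype=object)
np.save("annots_demo.npy", demo)

# allow_pickle=True is needed because the array stores Python strings.
annots = np.load("annots_demo.npy", allow_pickle=True)
print(len(annots), annots[0][0])
```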
- Run `src/vae/train_vae.py` for BrainVAE training.
  Note: change `sys.path`/`save_path`/`data_path` to run the code correctly.
- Run `src/s2n/run_sit_os.sh` for subject-specific S2N mapper training.
- Run `src/s2n/run_sit_os_ft.sh` for subject-adaptive S2N mapper training.
- Run `src/s2n/generate.py` for visual-to-fMRI synthesis.
- Run `src/s2n/eval.py` for voxel-level and semantic-level evaluation.
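As an illustration of what a voxel-level metric typically looks like (not necessarily the exact metric `eval.py` computes), a per-voxel Pearson correlation between synthesized and ground-truth fMRI can be sketched as:

```python
import numpy as np

def voxelwise_pearson(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Pearson r computed independently for each voxel (column)."""
    pred = pred - pred.mean(axis=0)
    gt = gt - gt.mean(axis=0)
    num = (pred * gt).sum(axis=0)
    den = np.sqrt((pred ** 2).sum(axis=0) * (gt ** 2).sum(axis=0))
    return num / den

# Toy data: 100 trials x 50 voxels; prediction = signal + noise.
rng = np.random.default_rng(0)
gt = rng.standard_normal((100, 50))
pred = gt + 0.5 * rng.standard_normal((100, 50))
r = voxelwise_pearson(pred, gt)
print(r.shape, float(r.mean()))
```

Semantic-level evaluation, by contrast, usually compares embeddings of the decoded stimulus rather than raw voxel values.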
```bibtex
@article{mai2025synbrain,
  title={SynBrain: Enhancing Visual-to-fMRI Synthesis via Probabilistic Representation Learning},
  author={Mai, Weijian and Wu, Jiamin and Zhu, Yu and Yao, Zhouheng and Zhou, Dongzhan and Luo, Andrew F and Zheng, Qihao and Ouyang, Wanli and Song, Chunfeng},
  journal={arXiv preprint arXiv:2508.10298},
  year={2025}
}
```
