TL;DR: VATEX is a novel method for referring image segmentation that leverages vision-aware text features to improve text understanding. By decomposing language cues into object and context understanding, the model can better localize objects and interpret complex sentences, leading to significant performance gains.
VATEX achieves state-of-the-art performance on multiple referring image segmentation benchmarks, demonstrating significant improvements over previous methods without requiring any external training data.
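The object/context decomposition above can be illustrated with a toy sketch (plain NumPy, not the paper's actual architecture — all names and shapes here are illustrative assumptions): text token embeddings are made "vision-aware" by cross-attending over image patch features, so each word's representation is grounded in the image before segmentation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def vision_aware_text(text_feats, image_feats):
    """Toy cross-attention: each text token attends over image patches.

    text_feats:  (num_words, dim)   -- e.g. word embeddings of the expression
    image_feats: (num_patches, dim) -- e.g. flattened backbone features
    Returns text features of the same shape, mixed with visual context.
    """
    dim = text_feats.shape[-1]
    attn = softmax(text_feats @ image_feats.T / np.sqrt(dim))  # (words, patches)
    visual_context = attn @ image_feats                        # (words, dim)
    return text_feats + visual_context                         # residual fusion

rng = np.random.default_rng(0)
words = rng.normal(size=(5, 16))     # 5-word referring expression
patches = rng.normal(size=(49, 16))  # 7x7 feature map, flattened
out = vision_aware_text(words, patches)
print(out.shape)  # (5, 16)
```

This is only a single cross-attention step; the actual model applies such grounding within a full transformer decoder before predicting the mask.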
- CUDA 11.1
- Python 3.8
- PyTorch 1.9.0
For detailed setup instructions, refer to installation.md.
Follow the steps outlined in data.md to prepare the datasets.
- Download Pretrained ImageNet Models:
- Swin-B
- Swin-L
- Video-Swin-B
- Place the downloaded models in the `weights` folder.
To train VATEX using `train_net_video.py`, first set up the corresponding datasets as described in data.md, then run:

```
python train_net_video.py --config-file <config-path> --num-gpus <num-gpus> OUTPUT_DIR <output-dir>
```

where `OUTPUT_DIR` is the directory where the weights and logs will be stored. For example, to train VATEX with a Swin-B backbone on 2 GPUs:

```
python train_net_video.py --config-file configs/refcoco/swin/swin_base.yaml --num-gpus 2 OUTPUT_DIR results/swin_base
```

To resume training, simply add the `--resume` flag.
To evaluate a trained model, use the following command:

```
python train_net_video.py --config-file configs/refcoco/swin/swin_base.yaml --num-gpus 2 --eval-only OUTPUT_DIR ${OUTPUT_DIR} MODEL.WEIGHTS link_to_weights
```
As shown in the table, our method achieves substantial improvements over state-of-the-art methods across all benchmarks in mIoU. Notably, we surpass recent methods CGFormer and VG-LAW by clear margins: +1.23% and +3.11% on RefCOCO, +1.46% and +3.31% on RefCOCO+, and +2.16% and +4.37% on the G-Ref validation splits, respectively. The more complex the expressions, the greater the gains achieved by VATEX. Even compared to LISA, a large pre-trained vision-language model, VATEX consistently achieves 3-5% higher performance across all datasets.
If you find VATEX useful for your research, please cite the following paper:
@inproceedings{nguyen2025visionaware,
title={Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding},
author={Nguyen, Truong and Others},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
year={2025},
url={https://openaccess.thecvf.com/content/WACV2025/html/Nguyen-Truong_Vision-Aware_Text_Features_in_Referring_Image_Segmentation_From_Object_Understanding_WACV_2025_paper.html}
}