Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding

TL;DR: VATEX is a novel method for referring image segmentation that leverages vision-aware text features to improve text understanding. By decomposing language cues into object and context understanding, the model can better localize objects and interpret complex sentences, leading to significant performance gains.

🏆 State-of-the-Art Performance

VATEX achieves state-of-the-art performance on multiple referring image segmentation benchmarks, demonstrating significant improvements over previous methods without requiring any external training data.

🎯 Benchmarks

  • RefCOCO
  • RefCOCO+
  • G-Ref
  • Additional benchmarks


🛠️ Requirements & Setup

🖥️ System Requirements

  • CUDA 11.1
  • Python 3.8
  • PyTorch 1.9.0
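Before installing, a quick environment sanity check can save time. The sketch below is illustrative and not part of the repo; it reports the Python version against the requirement above and, if PyTorch happens to be installed already, its version and CUDA build.

```python
# Minimal environment check for the requirements listed above.
# The torch import is guarded, since PyTorch may not be installed yet.
import sys

ok = sys.version_info >= (3, 8)
print("python ok" if ok else "python too old")
try:
    import torch
    print("torch", torch.__version__, "cuda", torch.version.cuda)
except ImportError:
    print("torch not installed")
```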

📥 Installation

For detailed setup instructions, refer to installation.md.

🗂️ Data Preparation

Follow the steps outlined in data.md to prepare the datasets.

🚀 Getting Started

  1. Download Pretrained ImageNet Models:
    • Swin-B
    • Swin-L
    • Video-Swin-B
  2. Place models in the weights folder.
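The expected layout can be verified with a short sketch (not part of the repo); the `weights` directory name follows step 2 above, and the checkpoint filenames depend on which backbones you downloaded.

```python
# Sketch: confirm the weights/ folder exists and list the checkpoints
# placed in it (filenames depend on the downloaded backbones).
from pathlib import Path

weights_dir = Path("weights")
weights_dir.mkdir(exist_ok=True)
checkpoints = sorted(p.name for p in weights_dir.glob("*.pth"))
print(len(checkpoints), "checkpoint(s) found in", weights_dir)
```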

🏋️‍♂️ Training

To train VATEX using train_net_video.py, first set up the corresponding datasets as described in data.md, then execute:

python train_net_video.py --config-file <config-path> --num-gpus <?> OUTPUT_DIR <?>

where OUTPUT_DIR is the directory in which the weights and logs will be stored. For example, to train VATEX with the Swin-B backbone on 2 GPUs:

python train_net_video.py --config-file configs/refcoco/swin/swin_base.yaml --num-gpus 2 OUTPUT_DIR results/swin_base

To resume training, simply add the flag --resume.
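The trailing `OUTPUT_DIR results/swin_base` pair follows the detectron2 launcher convention, where KEY VALUE pairs after the flags override entries of the loaded config. A toy sketch of that merge (illustrative only, not the repo's actual code):

```python
# Toy sketch of detectron2-style trailing overrides: opts is a flat list
# alternating KEY, VALUE, merged over the already-loaded config.
def merge_opts(cfg, opts):
    assert len(opts) % 2 == 0, "opts must be KEY VALUE pairs"
    for key, value in zip(opts[0::2], opts[1::2]):
        cfg[key] = value
    return cfg

cfg = {"OUTPUT_DIR": "./output", "MODEL.WEIGHTS": ""}
cfg = merge_opts(cfg, ["OUTPUT_DIR", "results/swin_base"])
print(cfg["OUTPUT_DIR"])  # results/swin_base
```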

📈 Evaluation

To evaluate a trained model, use the following command:

python train_net_video.py --config-file configs/refcoco/swin/swin_base.yaml --num-gpus 2 --eval-only OUTPUT_DIR ${OUTPUT_DIR} MODEL.WEIGHTS link_to_weights

📊 Main Results

(Main results table)


As shown in the table, our method achieves remarkable performance improvements over state-of-the-art methods across all benchmarks on mIoU metrics. Notably, we surpass recent methods like CGFormer and VG-LAW by substantial margins: +1.23% and +3.11% on RefCOCO, +1.46% and +3.31% on RefCOCO+, and +2.16% and +4.37% on G-Ref validation splits respectively. The more complex the expressions, the greater the performance gains achieved by VATEX. Even compared to LISA, a large pre-trained vision-language model, VATEX consistently achieves an impressive 3-5% better performance across all datasets.
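For reference, the mIoU metric cited above is the mean, over all samples, of the per-mask intersection-over-union. A minimal sketch on NumPy boolean masks (illustrative, not the repo's evaluation code):

```python
import numpy as np

def iou(pred, gt):
    # IoU of two boolean masks; an empty union counts as a perfect match.
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 1.0

def miou(preds, gts):
    # mIoU: average the per-sample IoUs over the dataset.
    return sum(iou(p, g) for p, g in zip(preds, gts)) / len(preds)

pred = np.array([[1, 1], [0, 0]], dtype=bool)
gt = np.array([[1, 0], [0, 0]], dtype=bool)
print(iou(pred, gt))  # 0.5
```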

📚 Citing VATEX

If you find VATEX useful for your research, please cite the following paper:

@inproceedings{nguyen2025visionaware,
  title={Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding},
  author={Nguyen, Truong and Others},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  year={2025},
  url={https://openaccess.thecvf.com/content/WACV2025/html/Nguyen-Truong_Vision-Aware_Text_Features_in_Referring_Image_Segmentation_From_Object_Understanding_WACV_2025_paper.html}
}
