This repository contains the official code for Continuous-time Object Segmentation using High Temporal Resolution Event Camera.
- torch >= 1.8.0
- torchvision >= 0.9.0
- ...
To install requirements, run:
conda create -n ECOSNet python=3.7
conda activate ECOSNet
pip install -r requirements.txt

Download the EOS dataset, then organize the data in the following format:
EventData
|----00001
| |-----e2vid_images
| |-----event_5
| |-----event_image
| |-----event_label_format
| |-----event_ori
| |-----rgb_image
|----00002
| |-----e2vid_images
| |-----event_5
| |-----event_image
| |-----event_label_format
| |-----event_ori
| |-----rgb_image
|----...
|----data.json
|----event_train.txt
|----event_test.txt
|----event_camera_test.txt
|----event_object_test.txt
Here e2vid_images contains the frames reconstructed by E2VID, event_5 contains the event voxel grids with 5 bins, event_image contains the event composition images, event_label_format contains the object masks, event_ori contains the original event streams, and rgb_image contains the RGB frames of each video.
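The layout above can be indexed programmatically. The sketch below pairs files across the six modality folders of one video by filename stem; assuming identical per-frame stems in every folder is a guess about the dataset's naming, not something stated above.

```python
from pathlib import Path

# The six per-video modality folders described above.
MODALITIES = ["e2vid_images", "event_5", "event_image",
              "event_label_format", "event_ori", "rgb_image"]

def index_video(root, video_id):
    """Return one dict per frame mapping modality name -> file path.

    Frames are matched across modalities by filename stem; only stems
    present in every modality folder are kept.
    """
    video = Path(root) / video_id
    per_mod = {m: {p.stem: p for p in (video / m).iterdir()}
               for m in MODALITIES}
    common = sorted(set.intersection(*(set(d) for d in per_mod.values())))
    return [{m: per_mod[m][s] for m in MODALITIES} for s in common]
```

A custom Dataset class would typically wrap such an index and load the voxel grid, RGB frame, and mask for each entry.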
This dataset is based on DAVIS17; we use v2e to generate the event streams. Download the DAVIS_Event dataset, then organize the data in the following format:
davis_event
|----bear
| |-----e2vid_images
| |-----event_5
| |-----event_image
| |-----event_label_format
| |-----event_ori
| |-----rgb_image
|----bike-packing
| |-----e2vid_images
| |-----event_5
| |-----event_image
| |-----event_label_format
| |-----event_ori
| |-----rgb_image
|----...
|----data.json
|----event_train.txt
|----event_test.txt
To train ECOSNet on the EOS or DAVIS_Event dataset, set the dataset root $cfg.DATA.ROOT in config.py, then run the following command:
python train.py --gpu ${GPU-IDS} --exp_name ${experiment}

Download the model checkpoint pretrained on the EOS dataset or the checkpoint for the DAVIS_Event dataset.
To evaluate ECOSNet on the EOS or DAVIS_Event dataset, modify $cfg.DATA.ROOT, then run the following command:
python inference.py --checkpoint ${./checkpoint/ECOS.pth} --results ${./results/EOS}

The results will be saved as indexed PNG files under ${results}/.
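Indexed PNGs store one integer object id per pixel plus a color palette, so the saved masks can be read back directly. The sketch below is a minimal example using Pillow; the function name load_indexed_mask is ours, not part of this repository.

```python
import numpy as np
from PIL import Image

def load_indexed_mask(path):
    """Read an indexed (palette-mode) PNG mask as an H x W id array.

    Pixel value 0 is conventionally background; 1, 2, ... are object ids.
    """
    img = Image.open(path)
    if img.mode != "P":
        raise ValueError(f"expected an indexed PNG, got mode {img.mode}")
    # np.array on a "P" image yields the palette indices, i.e. object ids.
    return np.array(img)
```

This is convenient for computing metrics such as per-object IoU against the event_label_format ground-truth masks.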
Additionally, you can adjust other parameters in config.py to change the configuration.
This codebase is built upon the official TransVOS repository.