This is the official code for Detaching and Boosting: Dual Engine for Scale-Invariant Self-Supervised Monocular Depth Estimation - RAL 2022 & ICRA 2023

AttackonMuggle/DaB_NET0


DABnet: Detaching and Boosting: Dual Engine for Scale-Invariant Self-Supervised Monocular Depth Estimation

Paddle | PyTorch | Paper

This is the code for a self-supervised monocular depth estimation model using the method described in

**Detaching and Boosting: Dual Engine for Scale-Invariant Self-Supervised Monocular Depth Estimation** (arXiv)

**Peizhe Jiang, Wei Yang, Xiaoqing Ye, Xiao Tan, and Meng Wu**

If you find our work useful, please consider citing our paper:

@article{jiang2022detaching,
  title={Detaching and Boosting: Dual Engine for Scale-Invariant Self-Supervised Monocular Depth Estimation},
  author={Jiang, Peizhe and Yang, Wei and Ye, Xiaoqing and Tan, Xiao and Wu, Meng},
  journal={IEEE Robotics and Automation Letters},
  volume={7},
  number={4},
  pages={12094--12101},
  year={2022},
  publisher={IEEE}
}

Abstract

Monocular depth estimation (MDE) in the self-supervised scenario has emerged as a promising method, as it removes the requirement for ground-truth depth. Despite continuous efforts, MDE is still sensitive to scale changes, especially when all the training samples come from a single camera. The problem is compounded because camera movement results in heavy coupling between the predicted depth and the scale change. In this paper, we present a scale-invariant approach for self-supervised MDE, in which scale-sensitive features (SSFs) are detached while scale-invariant features (SIFs) are boosted. Specifically, a simple but effective data augmentation that imitates the camera zooming process is proposed to detach SSFs, making the model robust to scale changes. In addition, a dynamic cross-attention module is designed to boost SIFs by adaptively fusing multi-scale cross-attention features. Extensive experiments on the KITTI dataset demonstrate that the detaching and boosting strategies are mutually complementary in MDE, and our approach achieves new state-of-the-art performance against existing works, improving the absolute relative error from 0.097 to 0.090.
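The zoom-imitating augmentation described above can be sketched as a center crop followed by a resize back to the original resolution, with the camera focal lengths rescaled to match. This is a hypothetical, dependency-free illustration of the idea, not the authors' exact implementation (the function name `zoom_augment` and the nearest-neighbour resize are assumptions for the sketch):

```python
import numpy as np

def zoom_augment(image, K, zoom):
    """Imitate camera zooming by a factor `zoom` > 1 (sketch only).

    Center-crops the image by 1/zoom, resizes it back to the original
    size with nearest-neighbour sampling, and scales the focal lengths
    fx, fy in the intrinsic matrix K so the geometry stays consistent.
    """
    h, w = image.shape[:2]
    ch, cw = int(round(h / zoom)), int(round(w / zoom))
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = image[top:top + ch, left:left + cw]

    # Nearest-neighbour resize back to (h, w); a real pipeline would
    # likely use bilinear interpolation instead.
    rows = (np.arange(h) * ch / h).astype(int)
    cols = (np.arange(w) * cw / w).astype(int)
    zoomed = crop[rows][:, cols]

    K_new = K.astype(float).copy()
    K_new[0, 0] *= zoom  # fx
    K_new[1, 1] *= zoom  # fy
    return zoomed, K_new
```

Training on such zoomed copies exposes the network to the same scene content at different apparent scales, which is what makes the scale-sensitive cues detachable.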
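The boosting side relies on cross-attention between feature maps. As a rough, generic sketch of the underlying operation (standard scaled dot-product cross-attention; the paper's dynamic cross-attention module additionally fuses multiple scales adaptively, and its exact design is in the paper):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats):
    """Scaled dot-product cross-attention (sketch).

    q_feats:  (N_q, d)  query features from one scale.
    kv_feats: (N_kv, d) key/value features from another scale.
    Returns (N_q, d): each query token re-expressed as a weighted
    combination of the other scale's features.
    """
    d = q_feats.shape[-1]
    attn = softmax(q_feats @ kv_feats.T / np.sqrt(d))
    return attn @ kv_feats
```

In a multi-scale setting, each scale's features attend to the others and the resulting maps are fused, letting scale-invariant cues reinforce each other across resolutions.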

Version

We offer two versions of the code base: the Paddle version and the PyTorch version. Please refer to the corresponding folders for detailed instructions.

Test for example
