PlaneRecTR++: Unified Query Learning for Joint 3D Planar Reconstruction and Pose Estimation
[paper]
Please refer to PlaneRecTR.
Please download the required sparse-view datasets from NOPE-SAC.
Then set the location of the builtin datasets via `export DETECTRON2_DATASETS=YOUR_DATASETS_FOLDER/`; detectron2 will look for datasets in the following directory structure:
```
YOUR_DATASETS_FOLDER/
├── scannetv2_multiview/
└── mp3d/
```
For MP3D single-view evaluation, please additionally prepare `mp3d_single_json`, which removes the duplicated images appearing in image pairs.
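As a minimal sketch, the dataset root can be set up as follows (the path `$HOME/datasets/` is a placeholder; substitute the folder where you extracted the NOPE-SAC downloads):

```shell
# Placeholder dataset root; substitute your actual folder.
export DETECTRON2_DATASETS="$HOME/datasets/"

# Create the expected layout (normally produced by extracting the
# sparse-view datasets downloaded from NOPE-SAC into this folder).
mkdir -p "$DETECTRON2_DATASETS/scannetv2_multiview" "$DETECTRON2_DATASETS/mp3d"

# Verify that detectron2 will find both dataset folders.
ls "$DETECTRON2_DATASETS"
```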
ScanNetv2
```shell
python sparseviews_train_net.py --num-gpus 1 --config-file configs/SparseViews/scannetv2_size480.yaml DATASETS.ROOT_DIR YOUR_DATASETS_FOLDER/scannetv2_multiview/ OUTPUT_DIR ./output_train_singleview_scannetv2
```
MP3D
```shell
python sparseviews_train_net.py --num-gpus 1 --config-file configs/SparseViews_mp3d/mp3d_size480.yaml DATASETS.ROOT_DIR YOUR_DATASETS_FOLDER/mp3d/ OUTPUT_DIR ./output_train_singleview_mp3d
```
You can download our trained models from here.
ScanNetv2
```shell
python sparseviews_train_net.py --eval-only --num-gpus 1 --config-file configs/SparseViews/scannetv2_size480.yaml DATASETS.ROOT_DIR YOUR_DATASETS_FOLDER/scannetv2_multiview/ OUTPUT_DIR ./output_eval_singleview_scannetv2 MODEL.WEIGHTS output_train_singleview_scannetv2/model_final.pth
```
MP3D
For MP3D single-view evaluation, use `mp3d_single_size480.yaml`.
```shell
python sparseviews_train_net.py --eval-only --num-gpus 1 --config-file configs/SparseViews_mp3d/mp3d_single_size480.yaml DATASETS.ROOT_DIR YOUR_DATASETS_FOLDER/mp3d/ OUTPUT_DIR ./output_eval_singleview_mp3d MODEL.WEIGHTS output_train_singleview_mp3d/model_final.pth
```
ScanNetv2
```shell
python sparseviews_train_net.py --num-gpus 1 --config-file configs/SparseViews/scannetv2_pose_phase_size480.yaml DATASETS.ROOT_DIR YOUR_DATASETS_FOLDER/scannetv2_multiview/ OUTPUT_DIR ./output_train_sparseview_scannetv2 MODEL.WEIGHTS output_train_singleview_scannetv2/model_final.pth
```
MP3D
```shell
python sparseviews_train_net.py --num-gpus 1 --config-file configs/SparseViews_mp3d/mp3d_pose_phase_size480.yaml DATASETS.ROOT_DIR YOUR_DATASETS_FOLDER/mp3d/ OUTPUT_DIR ./output_train_sparseview_mp3d MODEL.WEIGHTS output_train_singleview_mp3d/model_final.pth
```
You can download our trained models from here.
ScanNetv2
```shell
python sparseviews_train_net.py --eval-only --num-gpus 1 --config-file configs/SparseViews/scannetv2_pose_phase_size480.yaml DATASETS.ROOT_DIR YOUR_DATASETS_FOLDER/scannetv2_multiview/ MODEL.MASK_FORMER.TEST.SAVE_OUTPUT True OUTPUT_DIR ./output_eval_sparseview_scannetv2 TEST.VIS_PERIOD 500 MODEL.WEIGHTS output_train_sparseview_scannetv2/model_final.pth
```
Set `MODEL.MASK_FORMER.TEST.SAVE_OUTPUT` to `True` to save `PlaneRecTRpp_outputs.pkl` for evaluations.
```shell
python eval.py --config-file configs/SparseViews/scannetv2_pose_phase_size480.yaml --rcnn-cached-file output_eval_sparseview_scannetv2/inference/PlaneRecTRpp_outputs.pkl --dataset-phase sparseviews_scannetv2_plane_test --match-threshold 0.17 --corr-idx 0 --th-method 0
```
MP3D
```shell
python sparseviews_train_net.py --eval-only --num-gpus 1 --config-file configs/SparseViews_mp3d/mp3d_pose_phase_size480.yaml DATASETS.ROOT_DIR YOUR_DATASETS_FOLDER/mp3d/ MODEL.MASK_FORMER.TEST.SAVE_OUTPUT True OUTPUT_DIR ./output_eval_sparseview_mp3d TEST.VIS_PERIOD 500 MODEL.WEIGHTS output_train_sparseview_mp3d/model_final.pth
```
```shell
python eval.py --config-file configs/SparseViews_mp3d/mp3d_pose_phase_size480.yaml --rcnn-cached-file output_eval_sparseview_mp3d/inference/PlaneRecTRpp_outputs.pkl --dataset-phase sparseviews_mp3d_plane_test --match-threshold 0.08 --corr-idx 0 --th-method 0
```
In addition, `vis_planerec.py` can be used to produce the further visualizations presented in the paper.
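The cached `PlaneRecTRpp_outputs.pkl` that `eval.py` consumes is an ordinary Python pickle, so it can also be loaded directly for inspection. The sketch below only illustrates that load-and-count pattern; the real record layout is repo-specific, so a stand-in list of dicts is pickled first purely for demonstration:

```shell
# Stand-in demo: the real pickle is written by the evaluation run above
# when SAVE_OUTPUT is True; the per-record fields used here are fabricated.
python - <<'EOF'
import os
import pickle
import tempfile

# Fabricate a tiny cached-outputs file (stand-in records, not the real schema).
path = os.path.join(tempfile.mkdtemp(), "PlaneRecTRpp_outputs.pkl")
with open(path, "wb") as f:
    pickle.dump([{"image_id": 0}, {"image_id": 1}], f)

# Load it back and report how many cached records it contains.
with open(path, "rb") as f:
    records = pickle.load(f)
print("cached records:", len(records))
EOF
```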
We would like to thank NOPE-SAC, Mask2Former, rel-pose and other related open-source projects.