Add raft_large weights fine-tuned on Kitti #5081
Merged
NicolasHug merged 2 commits into pytorch:main on Dec 9, 2021
Conversation
facebook-github-bot pushed a commit that referenced this pull request on Dec 17, 2021
Reviewed By: fmassa
Differential Revision: D33185012
fbshipit-source-id: 0cde871c6054416b86634865ec758034ec51519e
towards #4644
This PR adds pre-trained weights for raft-large, fine-tuned on Kitti.
I'm publishing both our weights and the original weights. The Kitti authors kindly allowed us to submit our code for evaluation on Kitti-test.

Our f1-epe on Kitti-test is 5.19, whereas the original is 5.10. We're a tiny bit higher (i.e. worse), but still significantly better than the other baselines compared against in the paper (the next best one is 6.10).

There are submission restrictions, so to keep consistency with the C_T and C_T_SKHT weights, I just submitted the model that we already used in both (after Kitti-specific fine-tuning). In other words, these weights are literally the current C_T_SKHT_V with the last fine-tuning step on Kitti.

Evaluated on kitti train:
kitti test submission:
cc @datumbox