Boshen Zhang

42 comments by Boshen Zhang

Hi, the .mat files are generated with this script: https://github.com/zhangboshen/A2J/blob/master/data/icvl/data_preprosess.m

@NahianAlindo Hi, sorry that I don't have video inference code at the moment, but image inference code is provided (https://github.com/zhangboshen/A2J/blob/master/src/nyu.py); you can adapt it to test on your own video.
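Adapting the image inference script to video mostly means running the same forward pass once per frame. A minimal sketch of that loop, assuming a hypothetical `predict_joints` stand-in for the real A2J model call (names and shapes are illustrative, not the repo's API):

```python
import numpy as np

# Hypothetical stand-in for the A2J forward pass in src/nyu.py: takes one
# depth frame, returns predicted joints. Swap in the real model call here.
def predict_joints(depth_frame):
    return np.zeros((14, 3))  # e.g. 14 joints x (x, y, z)

def infer_video(frames, predict=predict_joints):
    """Per-frame inference loop; `frames` could come from cv2.VideoCapture
    or any other source that yields depth images one at a time."""
    return [predict(np.asarray(f, dtype=np.float32)) for f in frames]
```

With a real model you would also crop each frame around the hand/body region first, exactly as the image script does for single inputs.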

Hi, 1. there are a few choices you can try: a Faster R-CNN (or any other detector) human detector for bbox generation, and centers are borrowed from Gyeongsik et al. (https://github.com/mks0601/V2V-PoseNet_RELEASE) in...
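Whatever detector produces the bbox, the downstream steps are the same: crop the depth map with the box and derive a reference center from it. A minimal NumPy sketch, assuming a (H, W) depth map in millimeters and a pixel-coordinate bbox (function name and the median-depth heuristic are illustrative assumptions, not the repo's exact method):

```python
import numpy as np

def crop_and_center(depth, bbox):
    """Crop a depth map with a detector bbox and estimate a reference center.

    depth: (H, W) depth map in mm; bbox: (x1, y1, x2, y2) in pixels.
    Returns the crop and a (u, v, d) center: the bbox midpoint plus the
    median valid depth inside the crop.
    """
    x1, y1, x2, y2 = bbox
    crop = depth[y1:y2, x1:x2]
    valid = crop[crop > 0]                      # ignore missing-depth pixels
    d = float(np.median(valid)) if valid.size else 0.0
    u, v = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    return crop, (u, v, d)
```

V2V-PoseNet-style centers are more refined than this median heuristic, but the sketch shows the role the center plays: anchoring the crop in 3D before it is fed to the network.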

> need to finetune the Detector with depth dataset, right? and what is the depth dataset you used to finetune the Detector model? We fine-tune a FRCNN detector on the...

@logic03, sorry, the ITOP training code is somewhat messy and would take some effort to reorganize, but most of the ITOP training details are similar to the NYU code, except...

Hi, you can check this visualization code: https://github.com/mks0601/V2V-PoseNet_RELEASE/tree/master/vis

@nianniana, Hi, honestly, I don't remember the exact training time for A2J. It's not that slow, though; maybe 10-20 minutes per epoch on the NYU dataset.

Hi, the .h5 file for ITOP dataset can be found here: https://www.alberthaque.com/projects/viewpoint_3d_pose/
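The ITOP files are plain HDF5, so `h5py` is enough to inspect and load them. A self-contained sketch that writes a tiny stand-in file and reads it back; the path, the `"data"` key, and the array dimensions here are stand-ins for demonstration — list the keys of the real downloaded file to see what it actually contains:

```python
import h5py
import numpy as np

# Hypothetical demo path; the real ITOP .h5 files come from the page above.
path = "ITOP_demo.h5"

# Write a tiny stand-in file so this snippet runs on its own.
with h5py.File(path, "w") as f:
    f.create_dataset("data", data=np.zeros((2, 240, 320), dtype=np.float32))

# Reading: first inspect which datasets the file actually contains,
# then load one of them into memory as a NumPy array.
with h5py.File(path, "r") as f:
    print(list(f.keys()))
    depth = f["data"][:]
print(depth.shape)
```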

@shangchengPKU Hi, 1) The idea of predicting offsets instead of pixel-wise probabilities has been widely applied recently (e.g., "Dense 3D Regression for Hand Pose Estimation", "AWR: Adaptive Weighting Regression for 3D Hand...
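The offset idea boils down to: every anchor proposes a joint location (its own position plus a predicted offset), and the final estimate is a softmax-weighted average of those proposals. A minimal NumPy sketch of that aggregation, with illustrative shapes and names (not the repo's API):

```python
import numpy as np

def aggregate_joints(anchors, offsets, responses):
    """Weighted anchor-to-joint aggregation (sketch of the offset idea).

    anchors:   (A, 2) anchor positions in the image plane
    offsets:   (A, J, 2) per-anchor predicted offsets toward each joint
    responses: (A, J) per-anchor, per-joint informativeness logits
    Returns (J, 2): softmax-weighted sum over anchors of (anchor + offset).
    """
    # Softmax over the anchor axis (numerically stabilized).
    w = np.exp(responses - responses.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)
    proposals = anchors[:, None, :] + offsets     # (A, J, 2) candidate joints
    return (w[..., None] * proposals).sum(axis=0)
```

Compared with a pixel-wise heatmap, the regressed offsets are continuous, so the result is not quantized to the heatmap grid, and uninformative anchors are simply down-weighted.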

@shangchengPKU Glad to hear that. And thank you for your kind attention to our work.