Ashish Patel 🇮🇳’s Post

𝗗𝗮𝘆-𝟮𝟵𝟮 𝗖𝗼𝗺𝗽𝘂𝘁𝗲𝗿 𝗩𝗶𝘀𝗶𝗼𝗻 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴

𝗗𝗲𝗲𝗽𝗠𝗼𝗖𝗮𝗽: Deep Optical Motion Capture Using Multiple Depth Sensors and Retro-Reflectors, by the National Technical University of Athens, Greece, and the University of Lincoln, UK

Follow me for similar posts: 🇮🇳 Ashish Patel

Interesting Facts:
🔸 This paper was published in Sensors (MDPI, 2019) and has 10 citations.
-------------------------------------------------------------------
𝗔𝗺𝗮𝘇𝗶𝗻𝗴 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵: https://lnkd.in/eCQH5rhJ
Code: https://lnkd.in/eHxJPkK8
-------------------------------------------------------------------
𝗜𝗠𝗣𝗢𝗥𝗧𝗔𝗡𝗖𝗘
🔸 The field of human pose tracking, also known as motion capture (MoCap), has been studied for decades and is still a very active area of research. The technology is used in gaming, virtual/augmented reality, film making, computer graphics animation, and other domains to provide body (and facial) motion data for character animation, as well as for humanoid robot control and for interaction with a system or device's interface.
🔸 In this paper, a marker-based, single-person optical motion capture method (DeepMoCap) is proposed, using multiple spatio-temporally aligned infrared-depth sensors and retro-reflective straps and patches (reflectors).
🔸 DeepMoCap approaches motion capture by automatically localizing and labeling reflectors on depth images and, subsequently, in 3D space. Introducing a non-parametric representation to encode the temporal correlation among pairs of colorized depthmaps and 3D optical flow frames, a multi-stage Fully Convolutional Network (FCN) architecture is proposed to jointly learn reflector locations and their temporal dependency across sequential frames.
🔸 The extracted 2D reflector locations are spatially mapped into 3D space, resulting in robust 3D optical data extraction. The subject's motion is then efficiently captured by applying a template-based fitting technique to the extracted optical data.
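The spatial mapping of 2D reflector detections into 3D space rests on standard depth-camera deprojection: a pixel plus its depth value is lifted to a camera-space point via the pinhole model. This is a minimal sketch of that step, not the authors' code; the intrinsics fx, fy, cx, cy are assumed to come from sensor calibration:

```python
import numpy as np

def deproject(u, v, depth, fx, fy, cx, cy):
    """Lift a 2D pixel (u, v) with a metric depth value into a
    3D point in the depth camera's coordinate frame (pinhole model)."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# A reflector detected at the principal point, 2 m away,
# maps to (0, 0, 2) in camera space.
point = deproject(320, 240, 2.0, 525.0, 525.0, 320.0, 240.0)
```

Points deprojected per sensor would then be fused across the spatio-temporally aligned views before template fitting.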
🔸 Two datasets have been created and made publicly available for evaluation purposes: one comprising multi-view depth and 3D optical flow annotated images (DMC2.5D), and a second consisting of spatio-temporally aligned multi-view depth images along with skeleton, inertial, and ground truth MoCap data (DMC3D).
🔸 The FCN model outperforms its competitors on the DMC2.5D dataset using the 2D Percentage of Correct Keypoints (PCK) metric, while the motion capture outcome is evaluated against RGB-D and inertial data fusion approaches on DMC3D, outperforming the next best method by 4.5% in total 3D PCK accuracy. #computervision #artificialintelligence #innovation
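The PCK metric used in both evaluations counts a predicted keypoint as correct when it lies within a distance threshold of its ground truth location. A minimal sketch (generic, not the paper's evaluation script; the threshold convention is an assumption, as PCK variants normalize it differently, e.g. by torso or head size):

```python
import numpy as np

def pck(pred, gt, threshold):
    """Percentage of Correct Keypoints: fraction of predicted keypoints
    whose Euclidean distance to ground truth is within `threshold`.
    pred, gt: arrays of shape (num_keypoints, dim), dim = 2 or 3."""
    dists = np.linalg.norm(pred - gt, axis=-1)
    return float(np.mean(dists <= threshold))

# Two keypoints: one exact hit, one 1.0 away -> 50% at threshold 0.5.
pred = np.array([[0.0, 0.0], [1.0, 0.0]])
gt = np.zeros((2, 2))
score = pck(pred, gt, 0.5)  # 0.5
```

The same function applies unchanged to 3D points, which is how a "total 3D PCK" comparison across methods can be read.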


