End-to-end global to local convolutional neural network learning for hand pose recovery in depth data

Meysam Madadi *, Sergio Escalera, Xavier Baró, Jordi Gonzalez Sabate

*Corresponding author for this work

Research output: Contribution to journal › Article › Research › peer-review


Despite recent advances in 3-D pose estimation of human hands, enabled by the advent of convolutional neural networks (CNNs) and depth cameras, this task is still far from solved in uncontrolled setups. This is mainly due to the highly non-linear dynamics of fingers and to self-occlusions, which make hand model training challenging. In this study, a novel hierarchical tree-like structured CNN is exploited, in which branches are trained to become specialised in predefined subsets of hand joints called local poses. Local pose features, extracted from the hierarchical CNN branches, are then fused to learn higher-order dependencies among joints in the final pose through end-to-end training. The loss function is also defined to incorporate appearance and physical constraints on feasible hand motions and deformations. Finally, a non-rigid data augmentation approach is introduced to increase the amount of training depth data. Experimental results suggest that feeding a tree-shaped CNN, specialised in local poses, into a fusion network that models joint correlations and dependencies helps to increase the precision of final estimations, showing competitive results on the NYU, MSRA, Hands17 and SyntheticHand datasets.
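The abstract describes a global-to-local pipeline: branch networks specialise in joint subsets ("local poses"), and a fusion stage combines their features to predict the full hand pose. The following is a minimal schematic sketch of that data flow in NumPy; the joint count, subset partition, feature sizes and the linear stand-ins for the CNN branches and fusion network are all assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

# Schematic sketch of the global-to-local idea from the abstract:
# per-subset branches -> local pose features -> fusion -> full 3-D pose.
# All shapes and the random linear "layers" are hypothetical stand-ins.

rng = np.random.default_rng(0)

N_JOINTS = 21                                   # assumed 21-joint hand model
SUBSETS = [range(0, 5), range(5, 9), range(9, 13),
           range(13, 17), range(17, 21)]        # assumed thumb + four fingers
FEAT_DIM = 32                                   # assumed branch feature size

def branch_features(depth_patch, subset):
    """Stand-in for one tree branch specialised in a joint subset."""
    W = rng.standard_normal((FEAT_DIM, depth_patch.size))
    return np.maximum(W @ depth_patch.ravel(), 0.0)   # ReLU-like feature

def fuse(features, n_joints):
    """Stand-in for the fusion network modelling joint dependencies."""
    stacked = np.concatenate(features)                # global descriptor
    W = rng.standard_normal((n_joints * 3, stacked.size))
    return (W @ stacked).reshape(n_joints, 3)         # (x, y, z) per joint

depth = rng.standard_normal((96, 96))                 # fake depth crop
feats = [branch_features(depth, s) for s in SUBSETS]  # one feature per branch
pose = fuse(feats, N_JOINTS)
print(pose.shape)   # (21, 3)
```

The point of the sketch is only the wiring: each branch sees the same input but is responsible for one joint subset, and the fusion step operates on the concatenation of all branch features, which is where cross-joint dependencies can be learned.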
Original language: English
Pages (from-to): 50
Number of pages: 66
Journal: IET Computer Vision (Print)
Publication status: Published - 12 Aug 2021
