The use of both off-the-shelf and end-to-end trained deep networks has significantly improved the performance of visual tracking on RGB videos. However, the lack of large labeled datasets hampers the use of convolutional neural networks for tracking in thermal infrared (TIR) images. Therefore, most state-of-the-art methods for tracking on TIR data are still based on handcrafted features. To address this problem, we propose to use image-to-image translation models, which allow us to translate the abundantly available labeled RGB data into synthetic TIR data. We explore the use of both paired and unpaired image translation models for this purpose. These methods provide us with a large labeled dataset of synthetic TIR sequences, on which we can train end-to-end optimal features for tracking. To the best of our knowledge, we are the first to train end-to-end features for TIR tracking. We perform extensive experiments on the VOT-TIR2017 dataset and show that a network trained on a large dataset of synthetic TIR data obtains better performance than one trained on the available real TIR data. Combining both data sources leads to further improvement. In addition, when we combine the network with motion features, we outperform the state of the art with a relative gain of over 10%, clearly demonstrating the efficacy of using synthetic data to train end-to-end TIR trackers.
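The unpaired translation setting mentioned above (CycleGAN-style models) relies on a cycle-consistency objective: translating an RGB frame to the TIR domain and back should recover the original frame. The sketch below is a toy NumPy illustration of that objective only, not the paper's actual models; the generators `G` (RGB → pseudo-TIR) and `F` (pseudo-TIR → RGB) are hypothetical placeholders.

```python
import numpy as np

# Hypothetical stand-ins for the two generators of an unpaired
# (CycleGAN-style) translation setup. These are illustrative
# placeholders, not the trained networks from the paper.
def G(rgb):
    # Luminance-style projection of a 3-channel RGB batch to a
    # single "TIR" channel (assumption for illustration only).
    return rgb.mean(axis=-1, keepdims=True)

def F(tir):
    # Naive inverse: replicate the single channel back to 3 channels.
    return np.repeat(tir, 3, axis=-1)

def cycle_consistency_loss(rgb_batch):
    """L1 cycle loss ||F(G(x)) - x||_1 that unpaired translation
    models minimize so the round trip preserves image content."""
    reconstructed = F(G(rgb_batch))
    return np.abs(reconstructed - rgb_batch).mean()

# A grayscale batch (all channels equal) survives the round trip
# exactly, so its cycle loss is zero; a colored batch does not.
gray = np.full((2, 4, 4, 3), 0.5)
print(cycle_consistency_loss(gray))  # 0.0
```

In the real models, `G` and `F` are convolutional networks trained jointly with adversarial losses; the cycle term is what lets training proceed without paired RGB/TIR frames.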
Journal: IEEE Transactions on Image Processing
Publication status: Published - 1 Apr 2019
- deep learning
- generative networks
- thermal infrared
- visual tracking