TY - JOUR
T1 - Drift-proof tracking with deep reinforcement learning
AU - Chen, Zhongze
AU - Li, Jing
AU - Wu, Jia
AU - Chang, Jun
AU - Xiao, Yafu
AU - Wang, Xiaoting
PY - 2022
Y1 - 2022
N2 - Object tracking is an essential and challenging sub-domain of computer vision owing to its wide range of applications and the complexity of real-life situations. It has been studied extensively over the last decade, leading to several proposed tracking frameworks and approaches. Recently, the introduction of reinforcement learning and the Actor-Critic framework has effectively improved the tracking speed of deep learning trackers. However, most existing deep reinforcement learning trackers suffer a slight performance degradation, mainly owing to drift, which threatens tracking performance and may lead to losing the tracked target. Herein, we propose a drift-proof tracker with deep reinforcement learning that aims to improve tracking performance by counteracting drift while maintaining its real-time advantage. We use a reward function based on the Distance-IoU (DIoU) metric to guide the reinforcement learning and alleviate the drift caused by the trained model. Furthermore, double negative samples (hard negative and drift samples) are constructed during tracking for network initialization, and the loss is then computed with a small-error-friendly loss function. Our tracker can therefore better discriminate between positive and negative samples and correct the predicted bounding boxes when drift occurs. Meanwhile, a generative adversarial network is adopted for positive-sample augmentation. Extensive experimental results on multiple popular benchmarks show that our algorithm effectively reduces the occurrence of drift and boosts tracking performance compared with other state-of-the-art trackers.
AB - Object tracking is an essential and challenging sub-domain of computer vision owing to its wide range of applications and the complexity of real-life situations. It has been studied extensively over the last decade, leading to several proposed tracking frameworks and approaches. Recently, the introduction of reinforcement learning and the Actor-Critic framework has effectively improved the tracking speed of deep learning trackers. However, most existing deep reinforcement learning trackers suffer a slight performance degradation, mainly owing to drift, which threatens tracking performance and may lead to losing the tracked target. Herein, we propose a drift-proof tracker with deep reinforcement learning that aims to improve tracking performance by counteracting drift while maintaining its real-time advantage. We use a reward function based on the Distance-IoU (DIoU) metric to guide the reinforcement learning and alleviate the drift caused by the trained model. Furthermore, double negative samples (hard negative and drift samples) are constructed during tracking for network initialization, and the loss is then computed with a small-error-friendly loss function. Our tracker can therefore better discriminate between positive and negative samples and correct the predicted bounding boxes when drift occurs. Meanwhile, a generative adversarial network is adopted for positive-sample augmentation. Extensive experimental results on multiple popular benchmarks show that our algorithm effectively reduces the occurrence of drift and boosts tracking performance compared with other state-of-the-art trackers.
KW - Object tracking
KW - deep reinforcement learning
KW - drift problems
UR - http://www.scopus.com/inward/record.url?scp=85101752828&partnerID=8YFLogxK
U2 - 10.1109/TMM.2021.3056896
DO - 10.1109/TMM.2021.3056896
M3 - Article
AN - SCOPUS:85101752828
SN - 1520-9210
VL - 24
SP - 609
EP - 624
JO - IEEE Transactions on Multimedia
JF - IEEE Transactions on Multimedia
ER -