Target Tracking across Multiple Cameras

Team Members

Li-Wei Chan, Hsiang-Tao Wu, Hui-Shan Kao, Home-Ru Lin, Ju-Chun Ko, Mike Y. Chen, Jane Hsu and Yi-Ping Hung


We have developed an adaptive learning method for tracking targets across multiple cameras with disjoint fields of view. Two visual cues are commonly employed to track targets across cameras: the spatio-temporal cue and the appearance cue. Exploiting these cues requires learning the relationships among the cameras. Traditional methods learn these relationships from either hand-labeled correspondences or a batch-learning procedure, and are applicable only when the environment remains unchanged. In many situations, however, the environment varies significantly (for example, under lighting changes), and these traditional methods fail. We propose an unsupervised method that learns adaptively and can be applied to long-term monitoring. Furthermore, we propose a method that avoids weak links and discovers the true valid links among the entry/exit zones of the cameras from the correspondences. Our method outperforms existing methods in learning both the spatio-temporal and the appearance relationships, and achieves high tracking accuracy in both indoor and outdoor environments.
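The abstract above does not spell out the learning procedure, but the idea of accumulating unlabeled correspondences between entry/exit zones and pruning weak links can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the class name `LinkModel`, the `min_support` threshold, and the simple mean-based transition score are hypothetical simplifications, not the authors' actual algorithm.

```python
from collections import defaultdict

class LinkModel:
    """Hypothetical sketch: accumulate candidate correspondences between
    entry/exit zones of disjoint cameras, keep only well-supported links,
    and score new transitions with a crude spatio-temporal cue."""

    def __init__(self, min_support=5):
        self.min_support = min_support     # evidence needed before a link counts as valid
        self.transits = defaultdict(list)  # (exit_zone, entry_zone) -> observed travel times

    def observe(self, exit_zone, entry_zone, travel_time):
        """Record one candidate correspondence (unsupervised, so it may be noisy)."""
        self.transits[(exit_zone, entry_zone)].append(travel_time)

    def valid_links(self):
        """Discard weak links: keep only zone pairs with enough supporting observations."""
        return {pair: times for pair, times in self.transits.items()
                if len(times) >= self.min_support}

    def transition_score(self, exit_zone, entry_zone, dt, tol=2.0):
        """Crude spatio-temporal cue: 1.0 if dt is within `tol` seconds of the
        link's mean travel time, 0.0 for unsupported links or implausible dt."""
        times = self.transits.get((exit_zone, entry_zone), [])
        if len(times) < self.min_support:
            return 0.0
        mean = sum(times) / len(times)
        return 1.0 if abs(dt - mean) <= tol else 0.0
```

In a full system this score would be one factor in a correspondence likelihood, combined with an appearance-similarity term, and the statistics would be updated continually so the model adapts as the environment changes.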