Collaborative Driver Assistance System


In this research, we collaborate with the Intel-NTU Connected Context Computing Center. The goal is to develop a context-based micro-navigation system that provides instructions, such as changing to the left or right lane, or speeding up or slowing down, so that the driver can increase their safety margin to avoid a potential traffic hazard while improving driving efficiency to avoid traffic jams. To provide such a micro-navigation service, the system needs to combine technologies from various areas of expertise, including M2M observations, distributed learning, and anticipatory reasoning. To achieve this goal, this research integrates multiple research projects in the center, with additional help from the business team and the user experience team, and groups the researchers into four technical teams:

  1. Sensing: exploit all visual information from vehicles and roadside units for better neighbor map construction. In this research, we propose a vision-based positioning technique with sub-meter accuracy. Furthermore, we integrate multiple cameras to provide an intuitive perspective view of the vehicle's surroundings for better driver monitoring.
  2. Communication: provide reliable vehicle-to-vehicle and vehicle-to-roadside communications to support a wide range of ITS applications with different requirements.
  3. Data Analysis: identify and present drivers’ aggressive behaviors, and predict future potential dangerous events.
  4. User Experience: study user needs for visual feedback and analyze the effects of different visualization methods.
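A core step in combining multiple cameras into a single top-down view of the vehicle's surroundings is mapping each camera's pixels onto the ground plane, typically with a per-camera homography obtained from calibration. The following minimal sketch (with an illustrative identity homography; the function name and numbers are our own, not from the system) shows how a pixel is projected to ground-plane coordinates:

```python
def apply_homography(H, u, v):
    """Map image pixel (u, v) to ground-plane coordinates (x, y)
    using a 3x3 camera-to-ground homography H (nested lists)."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w  # divide out the projective scale

# With the identity homography, a pixel maps to itself.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(apply_homography(I, 320, 240))  # (320.0, 240.0)
```

In a real surround-view pipeline, one such homography per camera warps its image onto a common ground-plane canvas, and the warped images are then blended into the composite perspective view.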

Currently, we focus on developing two information visualization techniques for the Internet of Vehicles (IoV): transparent car and giraffe view. For transparent car, we demonstrate a proof-of-concept system and find that it favors a vehicular network with a low loss rate over one with a high data rate. For giraffe view, we build a driving simulator to test its usability and find that 97% of the participants wanted it to assist their daily driving, and 93% of them felt that they drove more smoothly and efficiently while using it.
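The preference for a low-loss link over a high-rate one can be made intuitive with a toy goodput calculation (all numbers here are illustrative assumptions, not measurements from the system): when video frames for the transparent-car display cannot be retransmitted in time, what matters is the rate of data that actually arrives.

```python
def goodput_mbps(rate_mbps, loss_rate):
    """Expected rate of successfully delivered data in Mbps,
    assuming independent losses and no retransmission."""
    return rate_mbps * (1.0 - loss_rate)

low_loss  = goodput_mbps(6.0, 0.02)   # slower but reliable link
high_rate = goodput_mbps(27.0, 0.85)  # nominally faster, very lossy link
print(low_loss, high_rate)  # the reliable link delivers more usable data
```

Under these assumed numbers, the 6 Mbps link with 2% loss delivers more usable video data than the 27 Mbps link with 85% loss, matching the qualitative finding above.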


    • Kuan-Wen Chen, Chun-Hsin Wang, Xiao Wei, Qiao Liang, Ming-Hsuan Yang, Chu-Song Chen, and Yi-Ping Hung, “Vision-Based Positioning with Sub-meter Accuracy for Internet-of-Vehicles,” IPPR Conference on Computer Vision, Graphics, and Image Processing, Aug. 2015.
    • Yen-Ting Yeh, Chun-Kang Peng, Kuan-Wen Chen, Yong-Sheng Chen, and Yi-Ping Hung, “Driver Assistance System Providing an Intuitive Perspective View of Vehicle Surrounding,” International Workshop on My Car Has Eyes: Intelligent Vehicle With Vision Technology, Singapore, Nov. 2014.
    • Kuan-Wen Chen, Shen-Chi Chen, Kevin Lin, Ming-Hsuan Yang, Chu-Song Chen, and Yi-Ping Hung, “Object Detection for Neighbor Map Construction in an IoV System,” IEEE International Conference on Internet of Things (iThings), Sep. 2014.
    • Kuan-Wen Chen, Hsin-Mu Tsai, Chih-Hung Hsieh, Shou-De Lin, Chieh-Chih Wang, Shao-Wen Yang, Shao-Yi Chien, Chia-Han Lee, Yu-Chi Su, Chun-Ting Chou, Yuh-Jye Lee, Hsing-Kuo Pao, Ruey-Shan Guo, Chung-Jen Chen, Ming-Hsuan Yang, Bing-Yu Chen, and Yi-Ping Hung, “Connected Vehicle Safety – Science, System, and Framework,” IEEE World Forum on Internet of Things (WF-IoT), Seoul, Korea, Mar. 2014.