IIUM Repository

Deep learning-based single-shot and real-time vehicle detection and ego-lane estimation

Abdul Matin, M. A. A. and Ahmad Fakhri, A. S. and Mohd Zaki, Hasan Firdaus and Zainal Abidin, Zulkifli and Mohd Mustafah, Y. and Abd Rahman, H. and Mahamud, N. H. and Hanizam, S. and Ahmad Rudin, N. S. (2020) Deep learning-based single-shot and real-time vehicle detection and ego-lane estimation. Journal of the Society of Automotive Engineers Malaysia, 4 (1). pp. 61-72. ISSN 2600-8092 E-ISSN 2550-2239

PDF - Published Version (restricted to registered users)


A vision-based Forward Collision Warning System (FCWS) is a promising driver-assist feature that can alleviate road accidents and make roads safer. In practice, it is exceptionally hard to develop an accurate and efficient algorithm for FCWS due to the complexity of the steps involved: vehicle detection, target vehicle verification, and time-to-collision (TTC) estimation. These steps form an elaborate FCWS pipeline built on classical computer vision methods, which limits both the robustness of the overall system and the scalability of the algorithm. Deep neural networks (DNNs) have shown unprecedented performance on vision-based object detection, opening the possibility of using them as an effective perception tool for automotive applications. In this paper, a DNN-based single-shot vehicle detection and ego-lane estimation architecture is presented. This architecture detects vehicles and estimates ego-lanes simultaneously in a single shot, using SSD-MobileNetv2 as the backbone network. Traffic ego-lanes are defined as semantic regression points. We collected and labelled 59,068 images as an ego-lane dataset and trained the feature extractor architecture, MobileNetv2, to estimate where the ego-lanes lie in an image. Once the feature extractor was trained for ego-lane estimation, the single-shot detector (SSD) meta-architecture was trained to detect vehicles. Our experimental results show that this method achieves real-time performance, with a total precision of 88% on the CULane dataset and 91% on our dataset for ego-lane estimation. Moreover, we achieve 63.7% mAP for vehicle detection on our dataset. The proposed architecture eliminates the elaborate multi-step pipeline otherwise required for FCWS. The proposed method runs in real time at 60 fps on a standard PC with an Nvidia GTX 1080, demonstrating its potential to run on an embedded device for FCWS.
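For context on the classical pipeline the paper replaces, the TTC step is commonly estimated in monocular FCWS from the scale change of the lead vehicle's bounding box between frames. Below is a minimal sketch of that standard scale-based estimate; the function name and pixel values are illustrative and not taken from the paper.

```python
def ttc_from_scale(w_prev: float, w_curr: float, dt: float) -> float:
    """Estimate time-to-collision from the apparent width of a lead
    vehicle's bounding box in two consecutive frames, dt seconds apart.

    Under the pinhole model with constant closing speed, image width
    grows as w = f*W/Z, which gives TTC = Z / (-dZ/dt) = w / (dw/dt).
    """
    dw_dt = (w_curr - w_prev) / dt
    if dw_dt <= 0:
        # Box is shrinking or unchanged: the gap is not closing,
        # so no collision is predicted.
        return float("inf")
    return w_curr / dw_dt

# Example: box widens from 100 px to 104 px over one frame at 30 fps,
# so dw/dt = 4 * 30 = 120 px/s and TTC = 104 / 120 ≈ 0.87 s.
print(ttc_from_scale(100.0, 104.0, 1.0 / 30.0))
```

Note that this estimate needs only image-plane quantities (no camera calibration or absolute distance), which is why it is popular in classical vision-based FCWS pipelines, but it is sensitive to bounding-box jitter; the single-shot DNN approach in the paper avoids chaining such steps altogether.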

Item Type: Article (Journal)
Additional Information: 8293/80104
Uncontrolled Keywords: Deep learning, Forward Collision Warning System (FCWS), ego-lane estimation, fine-tuning, feature extractor architecture, meta-architecture
Subjects: T Technology > TA Engineering (General). Civil engineering (General) > TA1001 Transportation engineering (General)
Kulliyyahs/Centres/Divisions/Institutes: Kulliyyah of Engineering
Kulliyyah of Engineering > Department of Mechatronics Engineering
Depositing User: Dr. Hasan Firdaus Mohd Zaki
Date Deposited: 27 Apr 2020 11:26
Last Modified: 09 Dec 2020 16:27
URI: http://irep.iium.edu.my/id/eprint/80104
