IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, VOL. ?, NO. ?, MARCH 2021

Blind Spot Warning System Based on Vehicle Analysis in Stream Images by a Real-Time Self-Supervised Deep Learning Model

Arash Pourhasan Nezhad, Mehdi Ghatee, Hedieh Sajedi

A. Pourhasan Nezhad and M. Ghatee are with the Department of Mathematics and Computer Science, Amirkabir University of Technology, Tehran, Iran; e-mails: arashphn@aut.ac.ir, ghatee@aut.ac.ir. H. Sajedi is with the School of Mathematics, Statistics and Computer Science, College of Science, University of Tehran, Iran; e-mail: hhsajedi@ut.ac.ir.

Manuscript received March 4, 2021; revised ?, ?.

Abstract—Despite the advent of intelligent systems, the number of fatal traffic accidents remains high. Driver assistance systems can significantly reduce this rate; for example, when a driver uses a turn signal, such systems alert the driver to objects present in the blind spot areas. Camera-based blind-spot assistance systems usually raise alerts by detecting objects, including vehicles, in image frames. Building on a more dynamic classification of dangerous situations during lane changes and turns to the sides, we propose an efficient blind-spot warning system that works with a single camera sensor on each side. Our contribution consists of two parts. First, we take a deeper look at classifying dangerous and safe situations in a dynamic environment with moving objects. To distinguish dangerous situations from safe ones, we employ a pre-trained state-of-the-art object detector to track vehicles across consecutive frames and then estimate the distances of the tracked cars with a 6% mean percentage error. In addition, the proposed system uses the cars' relative velocities to warn of dangerous situations in the blind spots. This classification process is not real-time. Therefore, in the second part, we propose a tiny model that serves as a real-time blind-spot driver assistance system. This tiny model feeds optical flow into CNN layers. The vision-based system uses self-supervised learning and does not require labeled data. It achieves 97% accuracy and detects dangerous situations in real time.

Index Terms—Driver Assistance System, Blind Spot Warning System, Image Processing, Deep Learning, Self-Supervised Learning.

I. INTRODUCTION

According to the Statistics Center of Iran, hundreds of thousands of accidents occur in Iran every year, unfortunately causing thousands of deaths. Twelve percent of these accidents are due to wrong turns and lane changes. With the advancement of technology, this rate can be reduced by advanced driver assistance systems (ADAS), which play an essential role in reducing human errors. There are several types of advanced driver assistance systems, each designed to improve driving and safety [1]. Different technological approaches based on image processing and machine learning can also be used to improve safety in transportation systems [2].

By default, mirrors are installed on both sides of all vehicles to widen the driver's viewing angle. However, one limitation of these side mirrors is that they leave areas that are not visible, and drivers are usually not aware of these areas. These areas are called blind spots, and the presence of vehicles in them increases accidents. Blind spots differ across vehicle types; see Fig. 1 for an example. When turning sideways or changing lanes, limited viewing angles and blind spots are among the most common causes of car accidents.

Fig. 1: Possible blind spots in heavy vehicles and cars. (Blind spots in heavy vehicles are more extended than in cars.) Actual blind spots are usually a subset of the potential blind spots shown in the image; they depend on several factors, such as the type and angle of the side mirrors.
Various driver assistance systems have been developed for blind spots to prevent this type of accident. The sensors used in driver assistance systems can be divided into visual and non-visual types. Among non-visual sensors, radar-based systems can measure distance with high accuracy; however, they are relatively expensive and have some limitations [3], [4]. In vision-based systems, various cameras are commonly used as sensors. In this paper, similar to [4], [5], [6], we use a camera, which is inexpensive and provides a good field of view. Some systems have used a fusion of several types of sensors [7], [8], [9], [10]. Other visual sensors, such as stereo cameras [11], [12], have also been used in some systems. In camera-based systems, the cameras are mounted on the side mirrors to capture images of the blind spot areas.

Some camera-based driver assistance systems can estimate the distance of vehicles [13], [14], [15], [16], [17]. To estimate vehicle distances accurately, fusion with other sensors such as Lidar is more appropriate. The camera functions like the human eye: cars at similar distances occupy similar numbers of pixels in the image. Therefore,
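To illustrate the idea of estimating distance and relative velocity from a detector's bounding boxes, the following is a minimal pinhole-camera sketch. The focal length, nominal vehicle width, function names, and the numbers in the example are assumptions for illustration only, not the paper's calibration or method.

```python
# Hypothetical sketch: monocular distance and closing speed from
# bounding-box widths, via the pinhole-camera model.
# Assumed (not from the paper): focal length in pixels and a
# nominal real-world car width.

FOCAL_LENGTH_PX = 700.0   # assumed camera focal length, in pixels
VEHICLE_WIDTH_M = 1.8     # assumed average car width, in meters

def estimate_distance(bbox_width_px: float) -> float:
    """Distance (m) of a car whose bounding box is bbox_width_px wide.

    Pinhole model: pixel_width = f * real_width / distance,
    so distance = f * real_width / pixel_width.
    """
    return FOCAL_LENGTH_PX * VEHICLE_WIDTH_M / bbox_width_px

def relative_speed(d_prev: float, d_curr: float, dt: float) -> float:
    """Closing speed (m/s) between two frames dt seconds apart.

    Positive means the tracked car is approaching.
    """
    return (d_prev - d_curr) / dt

# A tracked car whose box grows from 126 px to 140 px over 0.5 s:
d1 = estimate_distance(126.0)        # 10.0 m
d2 = estimate_distance(140.0)        # 9.0 m
print(relative_speed(d1, d2, 0.5))   # prints 2.0 (approaching at 2 m/s)
```

A wider box in the next frame means a shorter estimated distance, so a positive closing speed; a warning rule could then combine this speed with whether the box lies in the blind-spot region of the frame.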