Enhancing Drivers’ Attention By A Smart Binary Matching Machine to Avoid Accidents

Stress and sudden difficult situations have raised the risk of accidents on the roads. A driver's attention can be distracted within seconds under unexpected circumstances, which may arise from bad weather, vision problems, fatigue from long driving hours, damaged or broken traffic lights, and even children's noise inside the car. In this paper, I propose a colourful Deep Back Propagation Neural Network to enhance drivers' attention by observing different traffic light cases using a suggested smart binary matching machine system in Python. The smart machine system analyses and distinguishes real traffic lights from art signs and broken or damaged ones, in addition to pedestrian signs, based on a database of symbols for each case; the database takes the basic traffic lights and signs and develops them into damaged or unreal cases, before the trained network makes the right decision and sends an enhanced feedback signal to the driver. The algorithm consists of accurate image processing steps, with two long stages of full-content feature extraction vectors handled by Red-Yellow-Green Shallow and Deep Back Propagation Neural Networks (SBPNN and DBPNN) for each complex case. As a result, the algorithm achieved an accuracy of 100%, the most important factor in maintaining safety, recording the true label output as the value 1 with a predicted tested output of 1.0. The suggested system does not replace the driver's own decision; rather, it is an enhancing backup classification and recognition system for use before things move out of control. The feedback signal is calculated based on reducing the cost over 2500 iterations, reaching a least minimum value of 0.000012, and can be developed into a voice warning message to increase the awareness of drivers, besides the warning text on the screen.


INTRODUCTION
Recently, deep learning neural networks have become a robust technique for representing real-world applications in computer vision tasks. They have become a powerful tool to support detection, classification, and recognition processes, working on different sets of databases, including structured and unstructured data. Traffic Light Recognition (TLR) is still presented by many researchers to develop the concept of Intelligent Transportation Systems (ITS) [1,2]. Deep learning networks are not a replacement for human drivers but work mostly as a safety option and warning system when things go out of control. In addition, driving for long hours increases the risk of fatigue problems, which in turn increase accidents. Distracted driving not only puts drivers' lives at risk but poses a great risk to other passengers and to people passing on the same road.
The limitations of Artificial Intelligence Neural Networks (AINNs) have been reduced gradually with the continuous improvement of computer hardware [3]. Deep learning has also developed machine learning methods to simulate and mimic the neural model of the human brain in how it processes information and makes decisions [2,4,5]. Traffic facilities provide guidance and instructions to direct drivers on the road, including traffic lights, pedestrian signs, crosswalks, stop signs, and under-construction signs and symbols [6]. Therefore, in this paper I focus on traffic lights, classifying the lights by their colours so that the intelligent machine makes the right decision and follows the correct action when the traffic light is working properly. Some cases must be taken into consideration when working with intelligent machines that mimic human brains:
1. The traffic light is damaged, broken, or not working at all, meaning all the lights are turned off.
2. One of the lights' colours is fading, or giving a weak signal.
3. Transition between light states: in the case of red-yellow lights, the machine should know this is only a ready state, and it should not move.
4. All lights are working at the same time; this case is rare, but not impossible under bad weather, a power outage, or other mechanical failure.
Real-world problems are very challenging for intelligent machines, which have to predict the right decision before taking the accurate action. However, deep learning algorithms can deal with dynamic events in real time and work on massive amounts of data with many parameters and huge databases [7].

LITERATURE REVIEW
Traffic light and sign detection, classification, and recognition have recently been under investigation in a massive number of research works. With deep learning, artificial intelligence took a new course in developing auto-driving systems, smart vehicles, and intelligent transportation systems. The concepts are quite different yet share the very same principles, but the aim is one: safety. The authors of [1] proposed a deep learning-based detection network with prior maps for autonomous cars; they integrate the power of deep learning to recognize relevant traffic lights on predefined routes by combining a state-of-the-art deep detector with precise localization. The authors of [2] proposed to use Traffic Sign Recognition (TSR) not only to protect drivers but also to inspect the use of traffic signs on roads accurately, reducing the need to depend on human power and resources. Moreover, feature extraction is one of the most important steps before working with deep learning; it can be a challenging step in selecting images and building databases. Another work made use of a real-time detection concept with a camera-based system [4], proposing a DeepTLR that classifies the lights with a single deep convolutional network. As Traffic Sign Recognition (TSR) became an important application for avoiding traffic accidents, [6] proposed lightweight real-time traffic sign recognition based on YOLO by combining methods of deep learning, working on real-scene data sets and comparing them to the current detection model. Working with real-time traffic lights thus gives practical significance to improving road traffic safety [8].
Another challenge in image recognition is driving in complex scenarios: weather conditions and external environments can affect how autonomous cars recognize images. According to [9], integrating deep learning with multi-sensor data fusion assistance (MSDA) improves a vision-based traffic light recognition algorithm. The technique used in [10] also integrated computer vision and machine learning with a CNN to extract and detect features from a visual camera, improving recognition accuracy with an on-board GPS sensor that identifies the region of interest in the image containing the traffic light, assisting recognition under low-illumination conditions. Besides, other research papers worked on the problem of identifying and perceiving traffic symbols and signs using CNN deep networks, based on colour segmentation and shape matching [11,12,13,14].
Using automatic large-scale data or satellite imagery with deep learning has also enhanced image classification, as in [11], which proposed a comparative study of models trained to solve the problem of crosswalk classification. The works [13,15] proposed two points of view: traffic sign detection, and evaluating deep neural network architectures for object detection [16].
Another technique with a lightweight deep network to classify traffic signs was proposed by [17], who designed a training model (the teacher network) to transfer knowledge to a smaller model (the student).
Exploring a large number of studies shows that most of them have used CNNs for feature extraction. [18] introduced MicroNet, a highly compact CNN for classifying real-time embedded traffic signs, while [19] introduced a Fully Convolutional Network (FCN) to investigate the effect of extracting features from an imbalanced dataset of small objects based on shape and colours. In addition, many techniques have been produced for traffic light image classification: [20] proposed SSRN, a supervised deep learning method using a 3D convolutional layer, in which every layer regularizes the learning process and improves classification performance. [21] proposed TSingNet to detect and recognize occluded and small traffic signs in the wild, based on a pyramid network that learns scale-aware and context-rich features. An image processing pipeline and CNN were suggested to learn the behaviour of self-driving [22]. Other papers also used the powerful effect of CNNs [23,24,25,26,27,28]. Thus, using deep learning in the classification of traffic networks has produced higher accuracy than traditional machine learning [29,30,31].
Analyzing traffic lights and signs for detection, classification, and recognition is still an active area in the development of computer vision and machine learning [32]. Further investigation will require more papers on the same subject, following the changes in roads, cities, and countries around the world.

Shallow Back Propagation Neural Network as a Binary Classifier for Traffic Lights (SBPNN)
Supervised learning is an important part of machine learning; it requires a mapping between an input and a correct data output to predict an actual output from unknown data [25, 30, eq. (1)]. It aims to accurately predict the classification of tested data based on a previously defined database of trained samples. Shallow neural network algorithms are used in supervised learning, with a single hidden layer, where features are simply extracted from the training data to identify patterns of the database, as in the following equation:

ŷ = f(x) … (1)

Where ŷ is the predicted output and f(x) is a function of the input. In this paper, the traffic light is assumed to be a supervised binary classifier for the light colours, in which only one state gives logic one, as the green light turns on and the driver should move, while the other states require the driver to stop when logic zero results at the output. Table (1) summarizes the required action for each light colour. The suggested SBPNN was trained with two binary input states, five hidden units, and one output, a [2-5-1] layer dimension, built and evaluated in Python to satisfy the results of Table (1), recording 100% accuracy with a decision boundary and classifying the states of the light colours according to their output states; see Figure (1), where 3 binary inputs under different labels (Stop, Wait, and Ready) ask the driver to stop the car in the red boundary when the output is logic zero, while one state requires the driver to move the car when the output is logic one for the green light input. Consequently, the results will be the same no matter how many hidden units are added to the hidden layer. Increasing the number of training iterations decreases the cost value, as can be noticed, while the decision boundary remains the same, reporting a cost after iteration 0 of 0.693147 and a cost after iteration 9000 of 0.00003. The error has decreased to a minimum value at which no further training is required.
The proposed SBPNN binary classifier works well with the basic rules of classifying traffic lights according to binary values only, but it is limited in working with unstructured data like images, which are hard for computers to understand, containing many complicated features such as colours, lights, illumination, shape, textures, and morphological operations. So, with traffic light classification and recognition, the probability of changing the above states is very possible in the real world, considering weather conditions, sudden shutdown of electrical power, and damaged traffic lights. From this concept, the SBPNN was developed to work as a supervised DBPNN, where the features of images can be extracted gradually as they pass through the network's layers, causing the image representation to diminish as it passes each layer [28,33]. The low-level features are extracted by the initial layers and the high-level features by the deeper layers, combining a full representation of the input, where the binary classifier output is either one or zero. The algorithm aims to reduce the error between its predicted result and the training data models to recognize the patterns [31,32,34]. Tables (2, 3, and 4) show the proposed binary classifier map for the developed Deep Back Propagation Neural Network, and how to understand its work hypothetically in representing each light colour.
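As an illustration of the shallow stage, the following is a minimal NumPy sketch of a [2-5-1] back-propagation binary classifier. The 2-bit encoding of the light states, the seed, the learning rate, and the iteration count are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

# Minimal sketch of a [2-5-1] shallow back-propagation classifier (SBPNN).
# Assumed toy encoding: 2-bit input per light state; output 1 means "move".
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float).T  # shape (2, 4)
Y = np.array([[0, 0, 0, 1]], dtype=float)  # only state (1, 1) -> "move"

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.standard_normal((5, 2)) * 0.5   # hidden layer: 5 units
b1 = np.zeros((5, 1))
W2 = rng.standard_normal((1, 5)) * 0.5   # output layer: 1 unit
b2 = np.zeros((1, 1))

lr, m = 1.0, X.shape[1]
for i in range(5000):
    # Forward pass
    A1 = sigmoid(W1 @ X + b1)
    A2 = sigmoid(W2 @ A1 + b2)
    # Backward pass (cross-entropy loss with a sigmoid output)
    dZ2 = A2 - Y
    dW2, db2 = dZ2 @ A1.T / m, dZ2.mean(axis=1, keepdims=True)
    dZ1 = (W2.T @ dZ2) * A1 * (1 - A1)
    dW1, db1 = dZ1 @ X.T / m, dZ1.mean(axis=1, keepdims=True)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

pred = (sigmoid(W2 @ sigmoid(W1 @ X + b1) + b2) > 0.5).astype(int)
print(pred)  # after training, matches the labels in Y
```

The cost at iteration 0 for such a network is ln 2 ≈ 0.693147, matching the value reported above, since the sigmoid output starts near 0.5.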

Stop
According to Table (2), at the time the red light is on, the other lights should be off; this is the case (1-0-0), where the logical output should be (1) and the driver should stop. If the green light (0-0-1) or the yellow light (0-1-0) turns on when it is not supposed to, then the logical output should be (0), reporting a damaged state in that case. By counting the required seconds for each light when it turns on, the damaged or wrong state can be accurately specified. The same rules have been applied for the binary mapping of the yellow light compared to the other lights' cases, as can be seen in Table (3), and for the green light, as can be seen in Table (4).
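The mapping rule in Tables (2-4) can be sketched as a simple lookup: each colour network accepts only its own one-hot pattern and reports any other pattern as damaged. The function and names below are hypothetical stand-ins, not the paper's code.

```python
# Sketch of the binary mapping in Tables 2-4: each colour network outputs
# 1 only for its own valid one-hot state; any other pattern is flagged as
# a damaged/wrong state.
VALID = {"red": (1, 0, 0), "yellow": (0, 1, 0), "green": (0, 0, 1)}

def map_state(colour, lights):
    """lights is an observed (red, yellow, green) on/off triple."""
    if lights == VALID[colour]:
        return 1   # expected state: normal operation
    return 0       # unexpected state: report damaged

print(map_state("red", (1, 0, 0)))   # 1 -> driver should stop
print(map_state("red", (0, 0, 1)))   # 0 -> damaged/wrong state
```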

Image Processing in Python
Image processing required two necessary steps before starting the main processing in Python:
1. All RGB images were converted to jpg format.
2. All images were resized to a new dimension of (157 x 374) pixels.
In the paper, a Jupyter notebook was used to set up the code and import the necessary libraries for the interactive coding environment:
Matplotlib: to plot graphics.
Numpy: a fundamental package for scientific computing.
Skimage: to import images from database environment files.
PIL and Scipy: to test the network with other images.
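The two preparation steps can be sketched with Pillow (PIL), which the paper already imports. The file names and the (width, height) ordering of the target size below are assumptions for illustration.

```python
from PIL import Image

# Sketch of the two preparation steps: convert every RGB image to JPEG
# and resize to the paper's (157 x 374) working dimension.
TARGET_SIZE = (157, 374)  # assumed (width, height) in pixels

def prepare(path_in, path_out):
    img = Image.open(path_in).convert("RGB")   # force 3-channel RGB
    img = img.resize(TARGET_SIZE)              # normalize dimensions
    img.save(path_out, format="JPEG")          # uniform jpg format
    return img.size

# prepare("red_light.png", "red_light.jpg")  # hypothetical file names
```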
The main processing required the following steps. Because all image dimensions were set to (157 x 374) pixels, the three channel matrices are each set to (157 x 374). Therefore, to create a feature vector X with n dimensions, each pixel intensity value in the image cells must be unrolled, or reshaped, for each colour channel. Equation (2) [23] is used to reshape the new dimension for each image. We now have a feature vector, also called an object, for each traffic colour image, with a total of (176154) pixel values.
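The unrolling step can be sketched in NumPy; the dummy zero image below stands in for a real traffic light photo.

```python
import numpy as np

# Vectorization sketch: unroll a (157 x 374) RGB image into a single
# feature column of 157 * 374 * 3 = 176154 values, as in eq. (2).
img = np.zeros((374, 157, 3), dtype=np.uint8)   # height x width x channels
x = img.reshape(-1, 1)                          # feature vector
print(x.shape)  # (176154, 1)
```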
The technique of vectorization is very helpful when dealing not only with traffic light images but also with other sign and symbol images, as it considers all the features of the images, representing an object vector that can easily be understood by the computer or any intelligent machine; it is also faster compared to loops, considering the fact that the Jupyter notebook is working on the CPU. For each layer and the next layer of the network, two types of activation functions were used:
1. ReLU function: a rectified linear unit that belongs to the non-linear activation functions. With ReLU, few neurons are activated at a time, which makes training of the DBPNN easier and avoids the slow learning that could lead the values of hidden units to reach zero [27,34].
2. Sigmoid function: used for binary classification [28], where the predicted (actual) output ŷ lies in the binary range 0 ≤ ŷ ≤ 1, and ŷ can be calculated for the i-th training example with the sigmoid activation function of equation (6). For error calculation, two important functions were computed: the loss function and the cost function. The deep neural network algorithm's steps can be summarized as shown in Figure (7), from [34]: the algorithm starts by initializing the parameters, calculates the activated output during forward propagation, and updates the parameters after the backward pass; training continues for 2500-3000 iterations in order to reduce the cost, decreasing the loss rate between the output predicted by the DBPNN and the correct one.
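A minimal sketch of the two activation functions described above:

```python
import numpy as np

# Sketch of the two activation functions used in the DBPNN: ReLU for the
# hidden layers and sigmoid for the binary output.
def relu(z):
    return np.maximum(0, z)          # passes positives, zeroes negatives

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # squashes values into (0, 1)

z = np.array([-2.0, 0.0, 3.0])
print(relu(z))       # [0. 0. 3.]
print(sigmoid(0.0))  # 0.5
```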

MATCHING AND TESTING PROCESSES
Since our DBPNN works as a binary classifier, the matching process depends on binary states describing the traffic light output and the decision that must be taken for each state, as shown in Table (5), which summarizes the matching and testing results. When one of the traffic light colours turns on, the captured image is passed to the Red-DBPNN, Yellow-DBPNN, and Green-DBPNN at the same time. If the tested image matches the traffic light colour defined by a DBPNN, the predicted output should be logic one; a text message printed on the screen warns the driver of the next action to take, and the intelligent machine sends a feedback signal, helping to draw the driver's attention back to the road.
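The parallel matching stage can be sketched as follows. The predictor callables are hypothetical stand-ins for the trained Red/Yellow/Green DBPNNs, and the message strings are illustrative rather than the paper's exact wording.

```python
# Sketch of the matching stage: a captured frame is scored by the three
# colour networks in parallel, and the feedback message depends on which
# networks report a match.
def match_and_warn(frame, predictors):
    outputs = {c: p(frame) for c, p in predictors.items()}   # one bit each
    on = sorted(c for c, v in outputs.items() if v == 1)
    if on == ["green"]:
        return "MOVE"
    if on == ["red"]:
        return "STOP"
    if on == ["yellow"]:
        return "WAIT"
    if on == ["red", "yellow"]:
        return "READY - do not move yet"
    return "WARNING: damaged or unmatched traffic light state"

preds = {"red": lambda f: 1, "yellow": lambda f: 0, "green": lambda f: 0}
print(match_and_warn(None, preds))  # STOP
```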

Table 5: Binary Mapping of Traffic Light Matching and Testing Processes
To test the DBPNN, a test function was built with image processing steps to read other patterns of traffic light colours, signs, and symbol images, then predict the output to measure how accurate the network is in producing the desired output, which is considered to be the right decision to take on the road by the intelligent machine. The matching steps were added to the same test function to make a comparison between the images defined in the intelligent machine's database and the one that was entered; two more steps were added to direct the

RESULTS AND DISCUSSIONS
The results were arranged according to each DBPNN traffic light colour, making decisions based on the predicted output, with true label values that reflect the correct output for the trained colour DBPNN. The accuracy of the DBPNN is 100%, the percentage of data that are correctly classified, calculated as one in Python. The costs were recorded briefly from iteration 0 to iteration 2400 to show the error rate between the correct output in the database and the one that resulted according to one of the cases discussed before. Table (6) shows the proposed abbreviation for each image name, representing the light colours. Table (7) presents the results of matching and testing the Red-DBPNN, and how the intelligent machine sends the right feedback to the driver. The Yellow and Green DBPNNs should be off in that case, reporting the binary state for the red light as 100, while the pedestrian and other signs were all set to the (000) binary state. The required time between connecting to the Jupyter Notebook and loading the program to apply the required parameters and calculate the cost function for each traffic light colour is about one minute, while the matching process takes about 25 seconds, the shortest time that could be needed to analyze the signs, art symbols, and traffic light states down the road, then send a feedback message to the driver to inform him of the current state and enhance his decision to be more careful. See Figure (A-2, Appendix A). There are many techniques that help to minimize the effects of driver distraction, but the number of hazards on the road is still very high. Neural networks have been applied to road infrastructure to alert drivers about potential hazards; Germany is one of the countries using a self-learning system with radar, sensors, and cameras to pick out moving objects on the road, with alerts sent to drivers on in-car warning displays or via street lights [35,36]. In Iraq, we do not have such techniques, so we could have a smart system in cars with a deep learning neural network, making use of cameras to pick up traffic changes and signs and update them every few seconds, comparing their states against the defined cases in the database; in this way we focus on keeping the driver's attention. The evaluation in Python was powerful in satisfying the image processing and normalizing all the images to fit in the database, besides designing a binary matching system to compare the image defined in the DBPNN, according to the trained colour (Yellow-DBPNN and Green-DBPNN), with the input image, which reflected different cases under unexpected circumstances that might take place nowadays.
The results are promising despite the difficulties in collecting images and processing them into one format and shape to avoid any difference in extracting features, and despite the difficulty of searching for new concepts among the many studies on designing traffic light classification and recognition systems using deep neural networks. A new concept was proposed: a model of two stages, the first represented by building an SBPNN for each traffic light colour and connecting it to the second stage, which was represented by building a DBPNN for each colour, as mentioned before. The results showed how successful the model was in predicting the on/off state for each traffic light colour, directing the intelligent machine to send the right feedback to the driver using the proposed binary matching system. The development in Python was not easy, especially in designing the match and test processes; it took time before the required results were satisfied. The purpose of the feedback signals was to enhance the driver's awareness on the road in order to maintain safety. This system can be developed in the future by adding hardware that uses audio voices as warning messages; this will not only enhance the driver's awareness but also help in understanding how to deal with unexpected situations related to traffic light operation. No system is 100% able to function properly without being affected by bad weather or a sudden electrical shutdown that may affect the whole system.

Fig. 1 Binary Classifier (SBPNN) for Traffic Light Results in Python

Deep Back Propagation Neural Network as a Binary Classifier for Traffic Lights (DBPNN)

Table 2: Binary Mapping of Red-DBPNN when the Red Light turns on

The work does not depend on real-time images but on a collection of real scenes and art images that represent the states of the light colours under different circumstances, as can be noticed from the light output states in Tables (2, 3, 4). The database of the collected images contains three categories:
A. Traffic light images, which function correctly, as shown in Figure (2).

Fig. 4 Traffic Light Symbols

D. Signs on the roads, which represent the following states, as shown in Figure (5):
1. Stop Sign
2. Crosswalk
3. Under Construction

1. Vectorization: this step requires creating a feature vector after reading each RGB colourful image in a Numpy array pattern to fit the algorithm; each RGB image was separated into three colour channels, red, green, and blue respectively, as shown in [26, Fig. A-3], [26, Appendix A].

2. Normalization: a process to standardize the object features for each RGB colourful image, considering the maximum value of the pixel channel; see [26, eq. (3)]:

x_norm = x / 255 … (3)

Where x represents the input RGB image.

BUILDING A BINARY CLASSIFIER USING THE DBPNN ALGORITHM

The Back Propagation Neural Network (BPNN) is one of the most popular and powerful artificial neural networks that can be used in deep learning algorithms to reduce the training time. It uses optimization algorithms to calculate the gradient of the function at each iteration, reducing the randomness in neural networks [31]. The DBPNN was built as a two-stage model for each traffic light colour: Red-DBPNN, Yellow-DBPNN, and Green-DBPNN.

A. Two-layer Model (SBPNN)
A two-layer neural network was built with [1-5-1] dimensions in the first stage, after passing the image processing steps. X is the input of a traffic light colour image with (176154) pixels, the learning rate is 0.0075, and the number of iterations is 3000; the network was trained to return its parameters in a Python cache, saving them to be used in the second stage of the DBPNN. See Figure (A-4, Appendix A), [26, 34, Appendix A], which shows the architecture of the SBPNN as a binary classifier for the traffic light red colour; the same process was repeated for the traffic light green and yellow colours.

B. Four-layer Model (DBPNN)
A four-layer model of the DBPNN was built in the second stage with [176154-1-7-5-1] dimensions; seven hidden units were used in the first hidden layer and five hidden units in the second hidden layer, as shown in Figure (A-5, Appendix A), [34, Appendix A]. Parameter values, which represent the weights of the neural network, were selected as small random values; values that are too big will lead the hidden unit values to increase, and the sigmoid function will saturate, slowing the learning process.

W[l] is the weight matrix for layer l.
b[l] is the bias vector in the l-th layer.

Both matrices were used in each layer and the next layer of the network, where L represents the number of layers in the DBPNN. In order to keep the random weight values from changing between runs, a Numpy random seed function was used; it saves the state of the randomness, and calling the function multiple times will result in the same random numbers.
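Such an initialisation can be sketched as follows; the layer sizes, the seed value, and the 0.01 scale are illustrative, since the paper states only that the weights are small random values and that a NumPy seed is fixed.

```python
import numpy as np

# Sketch of the parameter initialisation: small random weights (to keep
# the sigmoid from saturating), zero biases, and a fixed NumPy seed so
# repeated runs produce identical random values.
np.random.seed(1)  # freeze the randomness, as described in the text

def init_params(layer_dims):
    params = {}
    for l in range(1, len(layer_dims)):
        # scale 0.01 keeps |z| small so sigmoid gradients stay useful
        params[f"W{l}"] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
        params[f"b{l}"] = np.zeros((layer_dims[l], 1))
    return params

p = init_params([4, 7, 5, 1])   # toy sizes: input -> 7 hidden -> 5 hidden -> 1
print(p["W1"].shape, p["W2"].shape, p["W3"].shape)  # (7, 4) (5, 7) (1, 5)
```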

Fig. 6
a. Loss Function: a measurement of the discrepancy between the prediction (also called the actual output) ŷ(i) and the correct output (also called the target) y(i); see [27, 34, eq. (7)]. It also depicts whether the model failed to predict the required classification during training and testing:

L(ŷ(i), y(i)) = -( y(i) log ŷ(i) + (1 - y(i)) log(1 - ŷ(i)) ) … (7)

Where ŷ(i) is the predicted output and y(i) is the desired output.

b. Cost Function: training the parameters requires defining the cost function, which is the average of the loss function over the complete training set, calculated as in [26, 34, eq. (8)]; Figure (6) shows the calculated cost functions for the red light SBPNN and the red light DBPNN respectively:

J = (1/m) Σ L(ŷ(i), y(i)), summed over i = 1 … m … (8)

Where m is the number of training examples, ŷ(i) is the predicted output, and y(i) is the desired output.

Cost Functions for a. Red Light-SBPNN and b. Red Light-DBPNN
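Equations (7) and (8) can be sketched directly in NumPy; the sample predictions below are illustrative.

```python
import numpy as np

# Sketch of the binary cross-entropy loss (eq. 7) and its average over
# m training examples, the cost (eq. 8).
def cost(y_hat, y):
    m = y.shape[1]
    losses = -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))  # eq. (7)
    return float(np.sum(losses) / m)                             # eq. (8)

y_hat = np.array([[0.9, 0.1]])   # predicted outputs
y = np.array([[1.0, 0.0]])       # desired outputs
print(round(cost(y_hat, y), 4))  # -ln 0.9, about 0.1054
```

Note that a prediction of ŷ = 0.5 everywhere gives a cost of ln 2 ≈ 0.693147, matching the iteration-0 cost reported for the SBPNN.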

Fig. 9 Cost Functions for the difference between the Red-DBPNN defined image and unmatched state for the yellow traffic image

Fig. A-1. Main flow chart of the computer program used in this research paper.

Figure (A-2) shows the cost function from iteration 0 to iteration 2400 for the Red-DBPNN; it was calculated to show the error value between the correct output and the predicted one for each matched and tested traffic light colour, and for the signs that reflect other information on the road. The same database of images was used in matching and testing the Yellow-DBPNN and Green-DBPNN.

Fig. A-3. A Vector of RGB Image Features Separated into 3 Channels

Table 1: Binary Mapping of Traffic Light Colors

Table 3: Binary Mapping of Yellow-DBPNN when the Yellow Light turns on

Table 4: Binary Mapping of Green-DBPNN when the Green Light turns on

Table 6: Images' Names and Abbreviations of the Lights' Colour States

Table 7: Binary Matching and Testing Results for Red-DBPNN

Table 8: Binary Matching and Testing Results for Yellow-DBPNN

Table (8) shows the matching and testing results for the Yellow-DBPNN, where the Red-DBPNN and Green-DBPNN should be off, reporting the binary state for the yellow light as 010, while Table (9) shows the matching and testing results for the Green-DBPNN, where the Red-DBPNN and Yellow-DBPNN should be off, reporting the binary state for the green light as 001.

Table 9: Binary Matching and Testing Results for Green-DBPNN