Driver Drowsiness Detector System using Raspberry Pi and OpenCV

Published September 3, 2020

Truck drivers who transport cargo and heavy materials over long distances, day and night, often suffer from lack of sleep. Fatigue and drowsiness are among the leading causes of major accidents on highways. Automobile manufacturers are working on technologies that can detect drowsiness and alert the driver about it.

In this project, we are going to build a Sleep Sensing and Alerting System for Drivers using a Raspberry Pi, OpenCV, and the Pi Camera Module. The basic purpose of this system is to track the driver's facial condition and eye movements, and if the driver is feeling drowsy, the system triggers a warning message. This is an extension of our previous facial landmark detection and face recognition applications.

Components Required

Hardware Components

  • Raspberry Pi 3
  • Pi Camera Module
  • Micro USB Cable
  • Buzzer

Software and Online Services

  • OpenCV
  • Dlib
  • Python3

Before proceeding with this driver drowsiness detection project, we first need to install OpenCV, imutils, dlib, NumPy, and some other dependencies. OpenCV is used here for digital image processing; some of its most common applications are object detection, face recognition, and people counting.

Here we are only using a Raspberry Pi, a Pi Camera, and a buzzer to build this sleep detection system.

Driver Drowsiness Detector System using Raspberry Pi

Installing OpenCV in Raspberry Pi

Before installing OpenCV and the other dependencies, the Raspberry Pi needs to be fully updated. Use the below command to update the Raspberry Pi to its latest version:

sudo apt-get update
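
If you like, you can also upgrade the already-installed packages (an optional step, not part of the original instructions, that can take several minutes):

sudo apt-get upgrade -y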

Then use the following commands to install the required dependencies for installing OpenCV on your Raspberry Pi.

sudo apt-get install libhdf5-dev -y
sudo apt-get install libhdf5-serial-dev -y
sudo apt-get install libatlas-base-dev -y
sudo apt-get install libjasper-dev -y
sudo apt-get install libqtgui4 -y
sudo apt-get install libqt4-test -y

Finally, install OpenCV on the Raspberry Pi using the below command.

pip3 install opencv-contrib-python==4.1.0.25
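
To confirm that OpenCV installed correctly (an optional check, not part of the original steps), print its version from the terminal:

python3 -c "import cv2; print(cv2.__version__)"

If this prints 4.1.0, the installation was successful.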

If you are new to OpenCV, check out our previous OpenCV tutorials with Raspberry Pi. We have also created a series of OpenCV tutorials starting from the beginner level.

Installing other Required Packages

Before programming the Raspberry Pi for the Drowsiness Detector, let's install the other required packages.

Installing dlib: dlib is a modern toolkit that contains machine learning algorithms and tools for solving real-world problems. Use the below command to install dlib.

pip3 install dlib

Installing NumPy: NumPy is the core library for scientific computing in Python. It contains a powerful N-dimensional array object and provides tools for integrating with C, C++, and other languages.

pip3 install numpy

Installing the face_recognition module: This library is used to recognize and manipulate faces from Python or the command line. Use the below command to install the face recognition library.

pip3 install face_recognition

Finally, install the eye_game library using the below command:

pip3 install eye-game
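
To make sure everything installed correctly (an optional check, not part of the original steps), try importing all the packages in one go:

python3 -c "import dlib, numpy, face_recognition, eye_game; print('All packages imported successfully')"

If any import fails, reinstall that particular package before moving on.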

Programming the Raspberry Pi

The complete code for the Driver Drowsiness Detector using OpenCV is given at the end of the page. Here, we explain some important parts of the code for better understanding.

So, as usual, start the code by including all the required libraries.

import face_recognition
import cv2
import numpy as np
import time
import RPi.GPIO as GPIO
import eye_game

After that, create an instance to obtain the video feed from the Pi camera. If you are using more than one camera, then replace the zero with one in the cv2.VideoCapture(0) function.

video_capture = cv2.VideoCapture(0)
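
cv2.VideoCapture(0) does not raise an error by itself if the camera cannot be opened, so a small sanity check (an addition for illustration, not part of the original code) makes that failure obvious:

# Optional check: stop early if the Pi camera could not be opened
if not video_capture.isOpened():
    raise RuntimeError("Could not open camera 0 - check the Pi camera connection and enable it in raspi-config")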

Now, in the next lines, enter the file name and path of the reference image. In my case, both the code and the image file are in the same folder. Then use the face encodings to get the face location in the picture.

img_image = face_recognition.load_image_file("img.jpg")
img_face_encoding = face_recognition.face_encodings(img_image)[0] 
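
Note that face_recognition.face_encodings() returns an empty list when it cannot find a face in img.jpg, which makes the [0] index fail with an IndexError. A defensive version of the second line (a sketch, not part of the original code) would be:

# Guard against an image in which no face is detected
encodings = face_recognition.face_encodings(img_image)
if not encodings:
    raise ValueError("No face found in img.jpg - use a clear, well-lit, front-facing photo")
img_face_encoding = encodings[0]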

After that, create two arrays to save the face encodings and their names. I am only using one image; you can add more images and their paths in the code, as sketched after the snippet below.

known_face_encodings = [
    img_face_encoding ]
known_face_names = [
    "Ashish"
]
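
For example, a second driver could be added like this (a sketch only; "driver2.jpg" and "Ravi" are placeholder names, not files from the original project):

# Load and encode a second known driver, then register both faces
driver2_image = face_recognition.load_image_file("driver2.jpg")
driver2_face_encoding = face_recognition.face_encodings(driver2_image)[0]
known_face_encodings = [
    img_face_encoding,
    driver2_face_encoding
]
known_face_names = [
    "Ashish",
    "Ravi"
]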

Then create some variables to store the face locations, face encodings, and face names.

face_locations = []
face_encodings = []
face_names = []
process_this_frame = True

Inside the while loop, capture the video frames from the stream, resize the frames to a smaller size for faster processing, and convert the captured frame to RGB color for face recognition.

ret, frame = video_capture.read()
small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
rgb_small_frame = small_frame[:, :, ::-1]
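
The slicing expression small_frame[:, :, ::-1] simply reverses the channel order from BGR to RGB. An equivalent, more explicit alternative (not used in the original code) is cv2.cvtColor, which also returns a contiguous array that newer versions of dlib/face_recognition tend to handle better:

# Explicit BGR-to-RGB conversion instead of the slicing trick
rgb_small_frame = cv2.cvtColor(small_frame, cv2.COLOR_BGR2RGB)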

After that, run the face recognition process to compare the faces in the video with the reference image, and also get the face locations.

if process_this_frame:
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
        cv2.imwrite(file, small_frame)
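
The comparison against the known face happens inside a loop over the detected encodings; the relevant call, taken from the full code at the end of the page, is:

matches = face_recognition.compare_faces(known_face_encodings, face_encoding)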

If the recognized face matches the face in the image, then call the eye_game function to track the eye movements. The code repeatedly tracks the position of the eye and eyeball.

face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
best_match_index = np.argmin(face_distances)
if matches[best_match_index]:
    name = known_face_names[best_match_index]
    direction = eye_game.get_eyeball_direction(file)
    print(direction)

If the code doesn't detect any change in eye movement for 10 consecutive checks (roughly 10 seconds), it triggers the alarm to wake up the person.

else:
    count = 1 + count
    print(count)
    if count >= 10:
        GPIO.output(BUZZER, GPIO.HIGH)
        time.sleep(2)
        GPIO.output(BUZZER, GPIO.LOW)
        print("Alert!! Alert!! Driver Drowsiness Detected")

Then use the OpenCV functions to draw a rectangle around the face and put the recognized name on it. Also, show the video frames using the cv2.imshow function.

cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
font = cv2.FONT_HERSHEY_DUPLEX
cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)
cv2.imshow('Video', frame)

Finally, set the key 'q' to stop the code.

if cv2.waitKey(1) & 0xFF == ord('q'):
    break

Testing the Driver Drowsiness Detection System

Once the code is ready, connect the Pi camera and the buzzer to the Raspberry Pi and run the code. After approximately 10 seconds, a window will appear with the live stream from your Raspberry Pi camera. When the device recognizes the face, it will print your name on the frame and start tracking the eye movement. Now close your eyes for 7 to 8 seconds to test the alarm; when the count becomes more than 10, the system triggers the buzzer, alerting you about the situation.

Driver Drowsiness Detection System

This is how you can build a Drowsiness Detector using OpenCV and Raspberry Pi. Scroll down for the working video and the complete code.

Code
import face_recognition
import cv2
import numpy as np
import time
import eye_game
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
BUZZER= 23
GPIO.setup(BUZZER, GPIO.OUT)
previous ="Unknown"
count=0
video_capture = cv2.VideoCapture(0)
# Path where each processed frame is saved for eye_game; the image_data folder must already exist
file = 'image_data/image.jpg'
# Load a sample picture and learn how to recognize it.
img_image = face_recognition.load_image_file("img.jpg")
img_face_encoding = face_recognition.face_encodings(img_image)[0]
# Create arrays of known face encodings and their names
known_face_encodings = [
    img_face_encoding   
]

known_face_names = [
    "Ashish"
]

# Initialize some variables
face_locations = []
face_encodings = []
face_names = []
process_this_frame = True
while True:
    # Grab a single frame of video    
    ret, frame = video_capture.read()    
    # Resize frame of video to 1/4 size for faster face recognition processing
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)   
    # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
    rgb_small_frame = small_frame[:, :, ::-1]
    # Only process every other frame of video to save time
    if process_this_frame:
        # Find all the faces and face encodings in the current frame of video
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
        cv2.imwrite(file, small_frame)
        face_names = []
        for face_encoding in face_encodings:
            # See if the face is a match for the known face(s)
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
            name = "Unknown"            
            face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
            best_match_index = np.argmin(face_distances)
            if matches[best_match_index]:
                name = known_face_names[best_match_index]                
                direction= eye_game.get_eyeball_direction(file)
                print(direction)
                #eye_game.api.get_eyeball_direction(cv_image_array)
                if previous != direction:
                    previous = direction
                    count = 0    # eye movement detected, so reset the counter
                else:
                    print("old same")
                    count = 1 + count
                    print(count)
                    if count >= 10:
                        GPIO.output(BUZZER, GPIO.HIGH)
                        time.sleep(2)
                        GPIO.output(BUZZER, GPIO.LOW)
                        print("Alert!! Alert!! Driver Drowsiness Detected")
                        cv2.putText(frame, "DROWSINESS ALERT!", (10, 30),
                                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                        count = 0    # reset after the alert so the buzzer is not retriggered every frame
            face_names.append(name)         
    process_this_frame = not process_this_frame
    # Display the results
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        # Scale back up face locations since the frame we detected in was scaled to 1/4 size
        top *= 4
        right *= 4
        bottom *= 4
        left *= 4
        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
        # Draw a label with a name below the face
        cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)
        #cv2.putText(frame, frame_string, (left + 10, top - 10), font, 1.0, (255, 255, 255), 1)
    # Display the resulting image
    cv2.imshow('Video', frame)
    # Hit 'q' on the keyboard to quit!
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()
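
To run the detector, save the script (for example as drowsiness_detector.py, a placeholder name not given in the article), keep img.jpg and an image_data folder next to it, and start it from the terminal:

python3 drowsiness_detector.py
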
Video


Comments

I am working on a drowsiness detection project. Can I use the above code as is, or do I need to modify it? Are there any other files besides the drowsiness_detection.py file that are required to run this code?

 

I was using this code and I get an error at line 19. Here is my code

16 file = 'image_data/img.jpg'
17 # Load a sample picture and learn how to recognize it.
18 img_image = face_recognition.load_image_file("/home/pi/Desktop/Pi_project/img.jpg")
19 img_face_encoding = face_recognition.face_encodings(img_image)[0]
20 # Create arrays of known face encodings and their names

And here is the error I am getting.

>>> %Run Drowsiness_detector.py
Traceback (most recent call last):
  File "/home/pi/Desktop/Pi_project/Drowsiness_detector.py", line 19, in <module>
    img_face_encoding = face_recognition.face_encodings(img_image)[0]
IndexError: list index out of range

Sir, can you help me with this error?
"Traceback (most recent call last):
  File "/home/pi/Desktop/eye /eyedetection.py", line 38, in <module>
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
cv2.error: OpenCV(4.1.0) /home/pi/opencv-python/opencv/modules/imgproc/src/resize.cpp:3718: error: (-215:Assertion failed) !ssize.empty() in function 'resize' "

What do I change in the file to load my own image? Thank you for your time.

 

VIDEOIO ERROR: V4L: can't open camera by index 0
Traceback (most recent call last):
  File "/home/pi/driver drawsiness.py", line 18, in <module>
    img_image = face_recognition.load_image_file("img.jpg")
  File "/home/pi/.local/lib/python3.7/site-packages/face_recognition/api.py", line 86, in load_image_file
    im = PIL.Image.open(file)
  File "/usr/lib/python3/dist-packages/PIL/Image.py", line 2634, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'img.jpg'
>>> 

I get this error. Please tell me how to add the img.jpg file and where to add it.

 

Error no attribute

Hello, I followed all the steps needed for running the code & I'm getting this error

Traceback (most recent call last):
  File "sleep.py", line 56, in <module>
    direction= eye_game.get_eyeball_direction(file)
AttributeError: module 'eye_game' has no attribute 'get_eyeball_direction'
 

Can you please help me understand the issue? thanks!

Hello. I came across your project. Can you please explain these two blocks:

"known_face_encodings"

"known_face_names"

Why did you put your name in that block?

Is it because, whenever your Pi camera starts live streaming of you, your name is going to appear on the frame?