Our motivation for building this project is to address the increasing traffic congestion in urban areas, which leads to delays, higher fuel consumption, and increased emissions. Traditional traffic light systems operate on fixed timing schedules, failing to adapt to real-time traffic conditions. Our system provides an intelligent, adaptive solution that improves traffic efficiency and safety while being cost-effective and scalable.
Proud Achievements:
1. Efficient Real-Time Performance: Achieving real-time processing and decision-making on an edge device, without relying on external servers, is a significant accomplishment. It demonstrates the potential of affordable hardware for complex AI tasks.
2. Contribution to Smart City Development: This project is a step towards smarter, more connected urban infrastructure, demonstrating how technology can be leveraged to solve real-world problems effectively.
Components Required
Sipeed Maixduino Kit for RISC-V AI + IoT
2 x Red LED
2 x Green LED
4 x 330 Ohm Resistor
Circuit Diagram
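Once the LEDs are wired up, a quick MicroPython sketch can confirm the wiring before any AI is involved. This is a minimal sketch, assuming the red LEDs share pin 12 and the green LEDs share pin 13, matching the firmware at the end of this guide:
python
import machine, time

# Pin assignments are assumptions matching the firmware later in this
# guide: red LEDs on pin 12, green LEDs on pin 13
red_led = machine.Pin(12, machine.Pin.OUT)
green_led = machine.Pin(13, machine.Pin.OUT)

# Alternate the two colors for ten seconds to confirm the wiring
for _ in range(5):
    red_led.value(1)
    green_led.value(0)
    time.sleep(1)
    red_led.value(0)
    green_led.value(1)
    time.sleep(1)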
Steps For Implementing The Smart Traffic Light System With The Sipeed Maixduino Kit:
1. Setting Up The Environment:
1. Install Required Tools:
Install the MaixPy IDE or PlatformIO for development.
Install necessary libraries and tools for image processing and AI model deployment on the Maixduino.
2. Prepare the Dataset:
Download the Highway Traffic Videos Dataset.
Extract the dataset to a working directory (a short extraction sketch follows these setup steps).
3. Set Up Jupyter Notebook:
You can use Jupyter Notebook to preprocess the dataset and train your model.
Ensure you have the following Python libraries installed:
bash
pip install numpy pandas matplotlib opencv-python tensorflow
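If the downloaded dataset arrives as a .zip archive, it can be unpacked from the notebook itself. A minimal sketch; the archive and folder names are placeholders to adjust for your setup:
python
import zipfile

# Placeholder paths: point these at the downloaded archive and the
# working directory where the videos should live
with zipfile.ZipFile("highway_traffic_videos.zip") as archive:
    archive.extractall("path_to_videos")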
2. Preprocessing The Dataset:
Extract Frames from Videos:
Use OpenCV to extract frames from each video in the dataset, and resize them to a fixed size (e.g., 224x224 pixels) suitable for training.
python
import cv2
import os

def extract_frames(video_path, output_dir, frame_step=1):
    """Save every frame_step-th frame of a video as a 224x224 JPEG."""
    os.makedirs(output_dir, exist_ok=True)
    # Prefix file names with the video name so frames from different
    # videos do not overwrite each other
    prefix = os.path.splitext(os.path.basename(video_path))[0]
    video = cv2.VideoCapture(video_path)
    success, frame = video.read()
    count = 0
    while success:
        if count % frame_step == 0:
            # Resize to the fixed input size expected by the CNN
            frame_resized = cv2.resize(frame, (224, 224))
            frame_name = os.path.join(output_dir, f"{prefix}_frame_{count}.jpg")
            cv2.imwrite(frame_name, frame_resized)
        success, frame = video.read()
        count += 1
    video.release()

video_files = [f for f in os.listdir('path_to_videos') if f.endswith('.mp4')]
for video_file in video_files:
    extract_frames(f'path_to_videos/{video_file}', 'path_to_frames')
Label the Frames:
Assign labels such as "low", "medium", and "high" to the frames based on traffic density. If labels are not available, you may need to manually annotate the frames or use a tool like LabelImg.
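Once every frame has a label, build the training and validation arrays that the training code below expects (X_train, y_train, X_val, y_val). This is a minimal sketch, assuming the labeled frames were sorted into one folder per class under path_to_frames:
python
import os
import cv2
import numpy as np

# Assumed layout: path_to_frames/low, path_to_frames/medium, path_to_frames/high
classes = ["low", "medium", "high"]
images, labels = [], []
for class_index, class_name in enumerate(classes):
    class_dir = os.path.join("path_to_frames", class_name)
    for file_name in os.listdir(class_dir):
        img = cv2.imread(os.path.join(class_dir, file_name))
        images.append(img.astype("float32") / 255.0)  # scale pixels to [0, 1]
        labels.append(class_index)

X = np.array(images)
# One-hot encode the labels for categorical_crossentropy
y = np.eye(len(classes), dtype="float32")[labels]

# Shuffle, then hold out 20% of the frames for validation
indices = np.random.permutation(len(X))
split = int(0.8 * len(X))
X_train, X_val = X[indices[:split]], X[indices[split:]]
y_train, y_val = y[indices[:split]], y[indices[split:]]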
3. Training The CNN Model:
Build the Model in Jupyter Notebook:
Use TensorFlow or PyTorch to build and train a CNN model for traffic density classification. Here’s an example with TensorFlow:
python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Build a simple CNN for three-way traffic density classification
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Conv2D(128, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(3, activation='softmax')  # 3 classes: low, medium, high
])

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# X_train/y_train and X_val/y_val come from the labeling step above
model.fit(X_train, y_train, epochs=10, validation_data=(X_val, y_val))
Export the Model:
Once the model is trained, convert it to the .kmodel format that the K210 chip can run, using the NNCase compiler.
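NNCase consumes a TensorFlow Lite model, so the trained Keras model first needs to be exported to .tflite (a short sketch using TensorFlow's standard converter):
python
import tensorflow as tf

# Convert the trained Keras model to TensorFlow Lite for NNCase
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

With model.tflite in hand, compile it to a .kmodel. The command below follows the nncase v0.2 ncc syntax (--dataset points at sample images used to calibrate quantization); check the usage notes for the version you installed: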
bash
ncc compile model.tflite model.kmodel -i tflite -t k210 --dataset path_to_dataset
4. Deploying The Model On Sipeed Maixduino:
1. Upload the Model:
Upload the .kmodel file to the Maixduino, either on an SD card or directly through the MaixPy IDE.
2. Write the Firmware:
Write a MicroPython script on the Maixduino to use the camera and perform real-time inference using the deployed model. The script should read the camera feed, pass it through the model, and control the LEDs based on the classification result.
python
import sensor, image, lcd, time
from Maix import KPU
import machine

lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((224, 224))  # crop to the model's 224x224 input size
sensor.run(1)

# Load the model from the SD card
task = KPU.load("/sd/model.kmodel")

# Initialize LEDs
red_led = machine.Pin(12, machine.Pin.OUT)
green_led = machine.Pin(13, machine.Pin.OUT)

while True:
    img = sensor.snapshot()
    lcd.display(img)
    # Run inference: the model is a softmax classifier, so take the
    # class with the highest score (low=0, medium=1, high=2)
    fmap = KPU.forward(task, img)
    scores = fmap[:]
    class_id = scores.index(max(scores))
    if class_id == 0:  # Low traffic
        red_led.value(0)
        green_led.value(1)
    elif class_id == 1:  # Medium traffic
        red_led.value(1)
        green_led.value(0)
    else:  # High traffic
        red_led.value(1)
        green_led.value(0)
    time.sleep(1)