According to the WHO (World Health Organization), the combined effects of ambient air pollution and household air pollution are associated with 7 million premature deaths annually.
Air pollution is a pressing global issue that poses a significant threat to human health and the environment. Many scientific studies have linked particle pollution exposure to a variety of health problems, including early mortality among individuals with heart or lung disease, nonfatal cardiac arrest, irregular heartbeat, and aggravated asthma.
Particulate matter levels can change within a 100-meter radius, so a large number of sensors is needed to map the particulate matter levels of an entire city. There is a visible gap between the current requirement for and supply of particulate matter sensors, due to the high cost of the sensors and of land allocation.
Impact Statement
This is possibly the world’s first camera-based air quality sensor system capable of detecting particulate pollution through real-time contour detection. By inviting citizen scientists to install these sensors indoors, my aim is to build a network that not only raises awareness but also empowers communities to combat air pollution. This has the potential to transform how air quality is monitored with low-cost sensing equipment.
My objective with this project is to build a sensor that measures air pollution using only computer vision and machine learning.
Hence, let me introduce my particulate matter detector. Unlike typical sensors, which use a photodiode to measure pollution, this sensor uses the Maixduino's camera to detect each particle individually. I’ll explain how this works.
After researching different particle detection methods, I discovered two powerful techniques: dynamic light scattering and optical particle counting.
Components
1. Maixduino Development Board
2. 445-455 nm 5V DC Laser Module
3. 4010 5V DC Fan Module
4. Custom 3D Printed Enclosure
5. Connection Cables
6. Power Supply (1S 18650)
To see the full demonstration video, click on the YouTube video below.
Designing the Chamber
I designed a custom 3D-printed chamber and added a 5V laser that directs a beam perpendicular to the camera. The chamber itself went through several design iterations.
At first, I experimented with different dimensions and materials to find the best setup for capturing particles with minimal light interference. I chose a completely enclosed, black 3D-printed chamber to ensure the laser light only illuminates particles directly in its path, preventing any unwanted reflections that could interfere with readings. Fine-tuning the alignment of the laser and camera was a critical step – even a minor misalignment could skew detection results.
Modifying the Camera Lens
Now, here’s the cool part: when a particle passes through the laser, it appears as a glowing spot on the camera. These particles are super small—about 1/10th the diameter of a human hair. To capture such fine particles, I modified the camera lens setup, increasing the sensor-to-lens distance by 8mm. This lets the camera zoom in more, so it can focus on a tiny 1mm x 1mm area. Modifying the lens also helped reduce distortions and sharpen the focus on individual particles, which was essential for reliable detection.
Here’s the math:
With a 1mm square base area and the Maixduino camera’s VGA resolution of 640x480 pixels, each pixel represents about 1.56 micrometers. So, when we detect a contour of 10 pixels, for example, we can calculate the size of each particle based on this resolution. With data on the airflow rate, we can estimate the concentration of different particle sizes passing through the detector. This math involved careful calibration of the airflow, the laser angle, and sensor distance to get an accurate size and count of each particle type.
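To make that conversion concrete, here is a small worked sketch of the arithmetic. The 1.56-micrometer-per-pixel scale follows from the numbers above, but the circular-contour assumption and the example flow rate and sampling window are placeholder values used purely for illustration, not calibrated figures:

import math

UM_PER_PIXEL = 1000 / 640  # 1 mm field of view spread across 640 pixels ~= 1.56 um per pixel

def particle_diameter_um(contour_pixels):
    # Treat the detected contour as a filled circle and return its equivalent diameter in micrometers.
    return 2 * math.sqrt(contour_pixels / math.pi) * UM_PER_PIXEL

def particles_per_litre(particle_count, flow_l_per_min, sample_seconds):
    # Particles counted over a known sampling window and airflow give a number concentration.
    sampled_litres = flow_l_per_min * (sample_seconds / 60)
    return particle_count / sampled_litres

print(particle_diameter_um(10))            # a 10-pixel contour -> roughly 5.6 um equivalent diameter
print(particles_per_litre(120, 0.5, 10))   # e.g. 120 particles in 10 s at 0.5 L/min -> 1440 per litre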
Testing and Calibration Process
Testing was another crucial part of the project. To ensure accuracy, I had to run controlled tests with known air quality levels to calibrate the sensor. I compared the readings from my sensor with those from a standard ZH03 particulate matter sensor, adjusting the laser power and camera sensitivity until my readings were consistent. I also tested it under various environmental conditions, like different humidity and temperature levels, to see if these factors affected the readings and to compensate accordingly.
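To give a sense of what that adjustment looks like numerically, below is a minimal calibration sketch that fits the camera-derived readings to the ZH03 reference with a simple least-squares line. The paired values and the choice of a linear model are illustrative assumptions on my part, not the actual calibration data:

# Least-squares fit of raw camera-derived readings against paired ZH03 PM2.5 readings.
# The paired values below are made-up examples, not real measurements.
raw = [12.0, 18.5, 25.1, 33.4, 41.0]        # camera-based sensor output (arbitrary units)
reference = [10.0, 16.0, 22.5, 30.0, 38.0]  # ZH03 PM2.5 readings (ug/m^3)

n = len(raw)
mean_x = sum(raw) / n
mean_y = sum(reference) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(raw, reference)) / sum((x - mean_x) ** 2 for x in raw)
offset = mean_y - slope * mean_x

def calibrated(raw_value):
    # Apply the fitted linear correction to a new raw reading.
    return slope * raw_value + offset

print("slope = {:.3f}, offset = {:.3f}".format(slope, offset))
print("calibrated(28.0) = {:.1f} ug/m^3".format(calibrated(28.0)))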
Deployment!
Finally, after calibration and model training, the camera-based air pollution sensor is ready to provide air pollution readings. I plan to further refine the image processing pipeline and introduce newer variants with improved efficiency and reliability. The models are planned to go through a benchmarking test, operating simultaneously alongside industry-standard sensors, before reliable deployment on a large scale.
I hope you liked the demonstration of this novel idea; please make sure to provide your feedback along with any suggestions. Thanks again for going through the project. If you would like to contribute your time to the project, you can contact me on my socials.
For all the code, follow the GitHub link below:
import sensor
import image
import lcd
import time
import gc

lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

# Contour (blob) area limits in pixels for a detection to count as a particle
min_blob_size = 1
max_blob_size = 1000

frame_count = 0
previous_frame = sensor.snapshot().copy()  # copy out of the shared frame buffer
chamber_volume = 5  # cm^3
fan_speed = 600  # rpm

def calculate_pm(model_output, pm_size):
    # Convert the regression output into a PM reading for the given size bin (1, 2.5 or 10 um)
    scaling_factor = 0.85
    particle_density = 1.2
    pm_value = scaling_factor * (model_output ** (1 + (0.5 * pm_size))) / (particle_density * pm_size)
    return pm_value

# regression_model is the trained regression model that maps the derived input to a
# particle reading; its loading/definition is not shown in this listing.

while True:
    gc.collect()
    current_frame = sensor.snapshot()
    frame_copy = current_frame.copy()  # keep an unmodified copy before difference() alters the frame
    difference = current_frame.difference(previous_frame)  # changed pixels = particles crossing the laser
    previous_frame = frame_copy

    # Find bright changed regions (glowing particles) in the difference image
    blobs = difference.find_blobs([(0, 100, 0, 0, 0, 0)], pixels_threshold=1, area_threshold=1)

    total_contour_size = 0
    total_intensity = 0
    valid_blob_count = 0
    for b in blobs:
        blob_area = b.w() * b.h()
        if min_blob_size <= blob_area <= max_blob_size:
            difference.draw_rectangle(b[0:4])
            difference.draw_cross(b[5], b[6])  # blob centroid (cx, cy)
            pixel_values = difference.copy().crop(b[0:4]).get_statistics()
            mean_intensity = (pixel_values.l_mean() + pixel_values.a_mean() + pixel_values.b_mean()) / 3
            total_contour_size += blob_area
            total_intensity += mean_intensity
            valid_blob_count += 1

    if valid_blob_count > 0:
        avg_contour_size = total_contour_size / valid_blob_count
        avg_intensity = total_intensity / valid_blob_count
    else:
        avg_contour_size = 0
        avg_intensity = 0

    derived_input = (avg_contour_size * avg_intensity * chamber_volume) / fan_speed
    model_output = regression_model.predict(derived_input)
    pm_1 = calculate_pm(model_output, 1)
    pm_2_5 = calculate_pm(model_output, 2.5)
    pm_10 = calculate_pm(model_output, 10)

    lcd.display(difference)
    print("Frame: {}, Avg Contour Size: {}, Avg Intensity: {}, PM1: {}, PM2.5: {}, PM10: {}".format(
        frame_count, avg_contour_size, avg_intensity, pm_1, pm_2_5, pm_10))

    frame_count += 1
    if frame_count % 5 == 0:
        # Periodically free image buffers and grab a fresh reference frame
        del current_frame
        del difference
        del blobs
        gc.collect()
        previous_frame = sensor.snapshot().copy()