Human Following Robot Using Arduino and Ultrasonic Sensor

Submitted by Gourav Tak on

Working of Human Following Robot Using Arduino

In recent years, robotics has witnessed significant advancements, enabling the creation of intelligent machines that can interact with the environment. One exciting application of robotics is the development of human-following robots. These robots can track and follow a person autonomously, making them useful in various scenarios like assistance in crowded areas, navigation support, or even as companions. In this article, we will explore in detail how to build a human following robot using Arduino and three ultrasonic sensors, complete with circuit diagrams and working code. Also, check all the Arduino-based Robotics projects by following the link.

Building a human following robot with Arduino code and three ultrasonic sensors is an interesting project. What makes it particularly interesting is the use of not just one, but three ultrasonic sensors. This adds a new dimension to the experience, as human-following robots are typically built with one ultrasonic sensor, two IR sensors, and one servo motor. The servo motor plays no role in the actual operation and only adds unnecessary complication, so I removed the servo and the IR sensors and used three ultrasonic sensors instead. With ultrasonic sensors, you can measure distance and use that information to navigate and follow a human target. Here’s a general outline of the steps involved in creating such a robot.

 

 

Components Needed for Human Following Robot Using Arduino

  • Arduino UNO board ×1

  • Ultrasonic sensor ×3

  • L298N motor driver ×1

  • Robot chassis

  • BO motors ×2

  • Wheels ×2

  • Li-ion battery 3.7V ×2

  • Battery holder ×1

  • Breadboard

  • Ultrasonic sensor holder ×3

  • Switch and jumper wires

Human Following Robot Using Arduino Circuit Diagram

Here is the schematic diagram of a Human-following robot circuit.

Arduino Human Following Robot Circuit Diagram

This design incorporates three ultrasonic sensors, allowing distance measurements in three directions: front, right, and left. These sensors are connected to the Arduino board through their respective digital pins. Additionally, the circuit includes two DC motors for movement, which are driven by an L298N motor driver module. The motor driver module is, in turn, connected to the Arduino board through its corresponding digital pins. To power the entire setup, two 3.7V Li-ion cells are employed, connected to the motor driver module via a switch.

Overall, this circuit diagram showcases the essential components and connections necessary for the Human-following robot to operate effectively.


Circuit Connection:

Arduino and HC-SR04 Ultrasonic Sensor Module:


  • Connect the VCC pin of each ultrasonic sensor to the 5V pin on the Arduino board.

  • Connect the GND pin of each ultrasonic sensor to the GND pin on the Arduino board.

  • Connect the trigger pin (TRIG) of each ultrasonic sensor to separate digital pins (2, 4, and 6) on the Arduino board.

  • Connect the echo pin (ECHO) of each ultrasonic sensor to separate digital pins (3, 5, and 7) on the Arduino board.

Arduino and Motor Driver Module:

  • Connect the digital output pins of the Arduino (digital pins 8, 9, 10, and 11) to the appropriate input pins (IN1, IN2, IN3, and IN4) on the motor driver module.

  • Leave the ENA and ENB jumpers (female headers) in place on the motor driver module so that both enable pins are tied to the onboard HIGH state and the motors run at full speed.

  • Connect the OUT1, OUT2, OUT3, and OUT4 pins of the motor driver module to the appropriate terminals of the motors.

  • Connect the +5V output pin of the motor driver module to the Vin pin on the Arduino, and its GND pin to the Arduino’s GND.

Power Supply:

  • Connect the positive terminal of the power supply to the +12V input of the motor driver module.

  • Connect the negative terminal of the power supply to the GND pin of the motor driver module.

  • Connect the GND pin of the Arduino to the GND pin of the motor driver module.

Human Following Robot Using Arduino Code

Here is simple Arduino Uno code for a three-ultrasonic-sensor human following robot that you can use for your project.

Ultrasonic Sensors on Robot

This code reads the distances from three ultrasonic sensors (‘frontDistance’, ‘leftDistance’, and ‘rightDistance’). It then compares these distances to find the sensor reporting the smallest distance. If that distance is below the threshold, it moves the robot accordingly using the appropriate motor control function (‘moveForward()’, ‘turnLeft()’, ‘turnRight()’). If none of the distances are below the threshold, it stops the motors using ‘stop()’.

In this section, we define the pin connections for the ultrasonic sensors and motor control. The S1Trig, S2Trig, and S3Trig variables represent the trigger pins of the three ultrasonic sensors, while S1Echo, S2Echo, and S3Echo represent their respective echo pins.

The LEFT_MOTOR_PIN1, LEFT_MOTOR_PIN2, RIGHT_MOTOR_PIN1, and RIGHT_MOTOR_PIN2 variables define the pins for controlling the motors.

The MAX_DISTANCE and MIN_DISTANCE_BACK variables set the thresholds for obstacle detection.

// Ultrasonic sensor pins
#define S1Trig 2
#define S2Trig 4
#define S3Trig 6
#define S1Echo 3
#define S2Echo 5
#define S3Echo 7
// Motor control pins
#define LEFT_MOTOR_PIN1 8
#define LEFT_MOTOR_PIN2 9
#define RIGHT_MOTOR_PIN1 10
#define RIGHT_MOTOR_PIN2 11
// Distance thresholds for obstacle detection
#define MAX_DISTANCE 40
#define MIN_DISTANCE_BACK 5

Make sure to adjust the values of ‘MIN_DISTANCE_BACK’ and ‘MAX_DISTANCE’ according to your specific requirements and the characteristics of your robot.

The suitable values for ‘MIN_DISTANCE_BACK’ and ‘MAX_DISTANCE’ depend on the specific requirements and characteristics of your human-following robot. You will need to consider factors such as the speed of your robot, the response time of the sensors, and the desired safety margin

Here are some general guidelines to help you choose suitable values.

‘MIN_DISTANCE_BACK’: This value represents the distance at which the robot should start backing up when an obstacle or hand is detected directly in front. It should be set to a distance that allows the robot to reverse safely without colliding with the obstacle or hand. A typical value is around 5-10 cm.

‘MAX_DISTANCE’: This value represents the maximum distance at which the robot considers the target to be in range and keeps moving. It should be set to a distance that provides enough room for the robot to move without colliding with any obstacles or hands. If your hand or an obstacle moves beyond this range, the robot should stop. A typical value is around 30-50 cm.

These values are just suggestions, and you may need to adjust them based on the specific characteristics of your robot and the environment in which it operates.

These lines set the motor speed limits. ‘MAX_SPEED’ denotes the upper limit for motor speed, while ‘MIN_SPEED’ is a lower value used for a slight left bias. The speed values lie within the range of 0 to 255 and can be adjusted to suit your specific requirements. Note that these constants take effect only if the ENA and ENB pins are driven with analogWrite(); with the enable jumpers left in place, as in this build, the motors always run at full speed.

// Maximum and minimum motor speeds
#define MAX_SPEED 150
#define MIN_SPEED 75

The ‘setup()’ function is called once at the start of the program. In it, we set the motor control pins (LEFT_MOTOR_PIN1, LEFT_MOTOR_PIN2, RIGHT_MOTOR_PIN1, RIGHT_MOTOR_PIN2) as output pins using ‘pinMode()’. We also set the trigger pins (S1Trig, S2Trig, S3Trig) of the ultrasonic sensors as output pins and the echo pins (S1Echo, S2Echo, S3Echo) as input pins. Lastly, we initialize serial communication at a baud rate of 9600 for debugging purposes.

void setup() {
  // Set motor control pins as outputs
  pinMode(LEFT_MOTOR_PIN1, OUTPUT);
  pinMode(LEFT_MOTOR_PIN2, OUTPUT);
  pinMode(RIGHT_MOTOR_PIN1, OUTPUT);
  pinMode(RIGHT_MOTOR_PIN2, OUTPUT);
  //Set the Trig pins as output pins
  pinMode(S1Trig, OUTPUT);
  pinMode(S2Trig, OUTPUT);
  pinMode(S3Trig, OUTPUT);
  //Set the Echo pins as input pins
  pinMode(S1Echo, INPUT);
  pinMode(S2Echo, INPUT);
  pinMode(S3Echo, INPUT);
  // Initialize the serial communication for debugging
  Serial.begin(9600);
}

This block of code consists of three functions (‘sensorOne()’, ‘sensorTwo()’, ‘sensorThree()’) responsible for measuring the distance using ultrasonic sensors.

The ‘sensorOne()’ function measures the distance using the first ultrasonic sensor. It sends a 10 µs trigger pulse and then measures how long the echo pin stays HIGH. The conversion from pulse duration to distance assumes the speed of sound is approximately 343 meters per second, i.e. roughly 29 microseconds per centimeter; dividing the duration by 29 and then halving the result (because the pulse covers the round trip to the target and back) gives the distance in centimeters.

The ‘sensorTwo()’ and ‘sensorThree()’ functions work similarly, but for the second and third ultrasonic sensors, respectively.

// Function to measure the distance using an ultrasonic sensor
int sensorOne() {
  //pulse output
  digitalWrite(S1Trig, LOW);
  delayMicroseconds(2);
  digitalWrite(S1Trig, HIGH);
  delayMicroseconds(10);
  digitalWrite(S1Trig, LOW);
  long t = pulseIn(S1Echo, HIGH);//Get the pulse
  int cm = t / 29 / 2; //Convert time to the distance
  return cm; // Return the values from the sensor
}
//Get the sensor values
int sensorTwo() {
  //pulse output
  digitalWrite(S2Trig, LOW);
  delayMicroseconds(2);
  digitalWrite(S2Trig, HIGH);
  delayMicroseconds(10);
  digitalWrite(S2Trig, LOW);
  long t = pulseIn(S2Echo, HIGH);//Get the pulse
  int cm = t / 29 / 2; //Convert time to the distance
  return cm; // Return the values from the sensor
}
//Get the sensor values
int sensorThree() {
  //pulse output
  digitalWrite(S3Trig, LOW);
  delayMicroseconds(2);
  digitalWrite(S3Trig, HIGH);
  delayMicroseconds(10);
  digitalWrite(S3Trig, LOW);
  long t = pulseIn(S3Echo, HIGH);//Get the pulse
  int cm = t / 29 / 2; //Convert time to the distance
  return cm; // Return the values from the sensor
}

In this section, the ‘loop()’ function begins by calling the ‘sensorOne()’, ‘sensorTwo()’, and ‘sensorThree()’ functions to measure the distances from the ultrasonic sensors. The distances are then stored in the variables ‘frontDistance’, ‘leftDistance’, and ‘rightDistance’.

Next, the code utilizes the ‘Serial’ object to print the distance values to the serial monitor for debugging and monitoring purposes.

void loop() {
  int frontDistance = sensorOne();
  int leftDistance = sensorTwo();
  int rightDistance = sensorThree();
  Serial.print("Front: ");
  Serial.print(frontDistance);
  Serial.print(" cm, Left: ");
  Serial.print(leftDistance);
  Serial.print(" cm, Right: ");
  Serial.print(rightDistance);
  Serial.println(" cm");

In this section of code, the first condition checks whether the front distance is below the threshold ‘MIN_DISTANCE_BACK’. If it is, the hand or obstacle is very close, and the robot should move backward to avoid a collision, so the ‘moveBackward()’ function is called.

if (frontDistance < MIN_DISTANCE_BACK) {
    moveBackward();
    Serial.println("backward");

If the previous condition is false, the next condition is checked: whether the front distance is less than the left distance, less than the right distance, and less than the ‘MAX_DISTANCE’ threshold. If this condition is true, the front distance is the smallest of the three and is also below the maximum distance threshold, so the ‘moveForward()’ function is called to make the robot move forward.

else if (frontDistance < leftDistance && frontDistance < rightDistance && frontDistance < MAX_DISTANCE) {
    moveForward();
    Serial.println("forward");

If the previous condition is false, this condition is checked. It verifies whether the left distance is less than the right distance and less than the ‘MAX_DISTANCE’ threshold. This indicates that the left distance is the smallest of the three and is below the maximum distance threshold, so the ‘turnLeft()’ function is called to make the robot turn left.

else if (leftDistance < rightDistance && leftDistance < MAX_DISTANCE) {
    turnLeft();
    Serial.println("left");

If neither of the previous conditions is met, this condition is checked. It verifies whether the right distance is less than the ‘MAX_DISTANCE’ threshold, which means the right distance is the smallest of the three and is below the maximum distance threshold. The ‘turnRight()’ function is called to make the robot turn right.

else if (rightDistance < MAX_DISTANCE) {
    turnRight();
    Serial.println("right");

If none of the previous conditions are true, it means that none of the distances satisfy the conditions for movement. Therefore, the ‘stop()’ function is called to stop the car.

 else {
    stop();
    Serial.println("stop");
  }
}

In summary, the code checks the distances from the three ultrasonic sensors and steers the robot toward whichever sensor reports the smallest in-range distance.

 

Important aspects of this Arduino-powered human-following robot project include:

  • Three-sensor setup for human detection to the front, left, and right
  • Distance measurement and decision-making in real-time
  • Navigation that operates automatically without human assistance
  • Avoiding collisions and maintaining a safe following distance

 

 

Technical Summary and GitHub Repository 

Using three HC-SR04 ultrasonic sensors and an L298N motor driver for precise directional control, this Arduino project demonstrates autonomous human tracking. For simple replication and modification, the full source code, circuit schematics, and assembly guidelines are accessible in our GitHub repository. To download the Arduino code, view comprehensive wiring schematics, and participate in the open-source robotics community, visit our GitHub page.


 

Frequently Asked Questions

⇥ How does an Arduino-powered human-following robot operate?
Three ultrasonic sensors are used by the Arduino-powered human following robot to determine a person's distance and presence. After processing this data, the Arduino manages motors to follow the identified individual while keeping a safe distance.

⇥ Which motor driver is ideal for an Arduino human-following robot?
The most widely used motor driver for Arduino human-following robots is the L298N. Additionally, some builders use the L293D motor driver shield, which connects to the Arduino Uno directly. Both can supply enough current for small robot applications and manage 2-4 DC motors.

⇥ Is it possible to create a human-following robot without soldering?
Yes, you can use motor driver shields that connect straight to an Arduino, breadboards, and jumper wires to construct a human-following robot. For novices and prototyping, this method is ideal.

⇥ What uses do human-following robots have in the real world?
Shopping cart robots in malls, luggage-carrying robots in airports, security patrol robots, elderly care assistance robots, educational demonstration robots, and companion robots that behave like pets are a few examples of applications.

 

Conclusion

This human following robot using Arduino project and three ultrasonic sensors is an exciting and rewarding project that combines programming, electronics, and mechanics. With Arduino’s versatility and the availability of affordable components, creating your own human-following robot is within reach.

Human-following robots have a wide range of applications in various fields, such as retail stores, malls, and hotels, where they can provide personalized assistance to customers. They can be employed in security and surveillance systems to track and monitor individuals in public spaces, and they can also be used in entertainment and events, elderly care, guided tours, research and development, education, and personal robotics.

These are just a few examples of the applications of human-following robots. As technology advances and robotics continues to evolve, we can expect even more diverse and innovative applications in the future.

Explore Practical Projects Similar To Robots Using Arduino

Explore a range of hands-on robotics projects powered by Arduino, from line-following bots to obstacle-avoiding vehicles. These practical builds help you understand sensor integration, motor control, and real-world automation techniques. Ideal for beginners and hobbyists, these projects bring theory to life through interactive learning.

 Simple Light Following Robot using Arduino UNO

Simple Light Following Robot using Arduino UNO

Today, we are building a simple Arduino-based project: a light-following robot. This project is perfect for beginners, and we'll use LDR sensor modules to detect light and an MX1508 motor driver module for control. By building this simple light following robot you will learn the basics of robotics and how to use a microcontroller like Arduino to read sensor data and control motors.

Line Follower Robot using Arduino UNO: How to Build (Step-by-Step Guide)

Line Follower Robot using Arduino UNO: How to Build (Step-by-Step Guide)

This step-by-step guide will show you how to build a professional-grade line follower robot using Arduino UNO, with complete code explanations and troubleshooting tips. Perfect for beginners and intermediate makers alike, this project combines hardware interfacing, sensor calibration, and motor control fundamentals.


Low-Cost Offline Voice Recognition Module Alternatives to VC-02

Submitted by Vedhathiri on

Voice control technology has become an important part of modern human-machine interaction. It allows users to control electronic devices and systems using simple spoken commands instead of traditional input methods such as buttons, switches, or touch screens. This type of interaction makes devices easier to use, more accessible, and more convenient in many applications such as smart homes, automation systems, and assistive technologies.

Many existing voice recognition systems depend on cloud-based processing. In these systems, the user’s voice is recorded and sent to a remote server through the internet, where the voice is processed and converted into commands. While this method can provide powerful voice recognition capabilities, it also introduces several limitations. These systems require a constant internet connection, and if the network connection is slow or unavailable, the system may not work properly. Cloud-based processing can also cause delays (latency) in response time and may raise privacy concerns, since voice data is transmitted and processed on external servers.

To overcome these challenges, offline voice recognition modules have been developed. These modules are designed to process and recognize voice commands directly on the device without needing any internet connection. This makes the system faster, more reliable, and more secure, since the voice data remains within the local device. Offline voice recognition is especially useful in embedded systems, automation projects, and environments where internet access may not always be available.

In this project, an offline voice command system is implemented using the SU-03T Offline Voice Recognition Module. The VC-02 is an official module by Ai-Thinker, offering well-defined firmware, proper documentation, and SDK support for customizing and training voice commands, making it suitable for advanced development. In contrast, the SU-03T is a more generic module produced by various manufacturers, and it is preferred here for its low cost, making it an economical choice for simple voice control applications. When the user speaks a command, the SU-03T processes the voice input and compares it with its stored command set; if a match is detected, the module triggers the corresponding action. In this project, the recognised voice commands are used to control LEDs, turning them on or off. Also check out ESP32 Offline Voice Recognition System using Edge Impulse, which provides hands-on experience in edge AI and TinyML deployment on microcontrollers.

Overview 

This tutorial was created using the SU-03T Offline Voice Recognition Module at the Circuit Digest lab for real-time testing. The SU-03T is used here as a practical, low-cost alternative to the VC-02 for offline voice command projects: it processes and recognises voice commands entirely on-device, with no internet connection required. In this project, we implement an offline voice command system using the SU-03T, one of the cheapest alternatives to the VC-02 on the market today.

SU-03T vs VC-02 – Quick Comparison

If you're evaluating an offline voice module as an alternative to the VC-02 or VC020, the table below summarises the key differences to help you choose the right module for your project:

Feature | SU-03T | VC-02 (Ai-Thinker)
Internet required? | No | No
Price (approx.) | Very low (generic) | Low (branded)
SDK / Firmware tool | Ai-Thinker SDK portal | Ai-Thinker SDK portal
English command support | Yes (via Ai-Thinker SDK) | Yes
GPIO output control | Yes | Yes
PWM support | Yes | Yes
Wake-free commands | Up to 10 | Up to 10
Documentation quality | Limited | Good
Best suited for | Budget projects, prototypes | Production, advanced dev

Components Required

The components listed below are required to build the complete setup. All items are widely available from electronics distributors such as DigiKey, Robu.in, and AliExpress.

S.No | Component | Quantity | Purpose
1. | SU-03T | 1 | The main module used in the setup
2. | Mic | 1 | Captures the voice commands from the user
3. | Speaker | 1 | Plays the predefined reply words
4. | USB to Serial Converter | 1 | Used to flash the firmware to the module
5. | LED (Green and Red) | 2 (1 each) | For observing the output
6. | 100 Ohm Resistor | 2 | Current limiting for the LEDs
7. | Breadboard | 1 | Temporary connections between components
8. | Jumper Wires | As required | Used to connect all the components

Circuit Diagram

The circuit diagram shows the complete hardware connections for this offline voice recognition project: the microphone and speaker connected to the voice module, LEDs connected to its GPIO pins via resistors, and the USB-to-TTL interface for firmware uploading and communication.

Circuit diagram of the SU-03T offline voice recognition module with microphone, speaker, LEDs, and USB-to-TTL interface

Pin Connection Summary

SU-03T Pin | Connects To | Notes
VCC (3.3 V) | USB-to-TTL 3.3 V output | Do not exceed 3.3 V; module is not 5 V tolerant
GND | Common ground (USB-TTL) | Shared ground for all components
TX | USB-to-TTL RX | UART communication / firmware flashing
RX | USB-to-TTL TX | UART communication / firmware flashing
MIC+ / MIC− | Electret microphone | Differential analog audio input
SPK+ / SPK− | 8 Ω speaker | Built-in amplifier output
GPIO1 | Green LED → 100 Ω → GND | Controlled by the "Turn on LED" command
GPIO2 | Red LED → 100 Ω → GND | Controlled by the "Turn off LED" command

Hardware Connection for the Offline Voice Recognition Module   

The SU-03T Offline Voice Recognition Module is connected to a USB-to-Serial converter for power supply and programming. A microphone and speaker are interfaced with the module to handle voice input and audio output. The GPIO pins of the module are connected to LEDs through current-limiting resistors to perform output actions.

Hardware connections for the SU-03T module: microphone (audio input), speaker (audio output), LEDs with resistors (GPIO output), and USB-to-TTL (power and programming)

How the SU-03T Offline Voice Recognition Module Works

The working of this project is based on the SU-03T Offline Voice Recognition Module, which is designed to recognize voice commands without requiring an internet connection. The module is connected to a microphone and a speaker, and has an internal processor that can analyse voice input and match it against predefined commands stored in its memory. Before using the module, the required voice commands must be configured and loaded into it.

Once the commands are configured and uploaded, the SU-03T continuously listens for voice input through the microphone. When a user speaks a command, the module captures the audio signal and converts it into digital data. The internal voice recognition engine then processes this signal and compares it with the stored voice command patterns. If the spoken command matches one of the predefined commands, the module identifies it and immediately triggers the corresponding action by driving the GPIO output pins connected to external components such as LEDs. There is also a website, https://smartpi.cn/#/, where the SU-03T can be flashed; however, it has a limitation: it reliably accepts only correct Chinese words, and English words are often ignored. So we are using the steps and website given below to flash our module. If you have time, you can explore that page for future use.

Step-by-Step: Configure and Flash the SU-03T Offline Voice Recognition Module

This same workflow is compatible with the VC-02 and serves as the recommended offline voice recognition SDK configuration process for all Ai-Thinker-compatible modules.

Step 1: Register on the Ai-Thinker Voice SDK Portal

http://voice.ai-thinker.com/#/SdkVersionList

Open the link (translate the page to English if needed) and log in; if you don’t have an account, register first. After logging in, you will see “Create the product” in the top-left corner. Click it.

Ai-Thinker voice SDK portal – the starting point for generating offline voice recognition firmware for the SU-03T and VC-02

Step 2: Create a Pure Offline Product Profile
In the dialog, click “Other products” and select the scene as “Pure Offline” and the module as “VC-02”. Give the product any name, set the language to English, and click Save.

Creating a Pure Offline product profile in the Ai-Thinker SDK – select VC-02 as the module type for SU-03T compatibility

Step 3: Review Pin and SDK Configuration
After the previous step, it will take you to the voice SDK section, where you need to set the configurations for pins and also set the commands. No need to change anything in the pin configuration section.

Voice SDK configuration page – GPIO pin assignments and module settings for the offline voice recognition system

Step 4: Configure the Wake Word
In the custom wake word section, you can set any wake word you prefer, like “Hai” or “hello”, and you also need to set the wake-up reply like “hello buddy” or “hello Circuit Digest”.

Wake word setup in the Ai-Thinker SDK – choose a short, phonetically distinct word for reliable offline wake word detection

Step 5: Add Offline Voice Recognition Commands
Set the behaviour words, for example “turnonled” and “turnoffled”. For the command words, enter the phrases you prefer, such as “Turn on led” or “lights on”, and give an appropriate reply sentence like “turning on the led” or “turning off the led”. Next to the Basic Information tab you will see the Control Details tab; click it and set the output configuration as per your requirement, such as low or high. Here you can also set a pulse output.

Command configuration: set the behaviour word, user-spoken phrase, audio reply, and GPIO output action for each offline voice recognition command

Step 6: Configure Wake-Free Commands
After setting all configurations, scroll down to the wake-free commands section, where up to 10 commands can be marked as wake-free. Wake-free commands work without the wake word; for all other commands, you must say the wake word first and then the command. Select which commands should be wake-free by ticking them here.

Wake-free command configuration – select up to 10 offline voice commands that can be triggered without repeating the wake word

Step 7: Select Voice Actor and Audio Settings
Next, set your preferred voice actor in the voice actor configuration section, where you can also adjust the tone, speed, and volume of the voice.

Voice actor and TTS audio configuration – customise volume, speech rate, and tone for the module's spoken replies

Step 8: Add Startup Announcement and Exit Commands
In the other configurations section, you can add the startup announcement, exit reply, voluntary withdrawal exit command and exit reply as per your need.

Startup announcement and exit command configuration for the SU-03T offline voice module

Step 9: Generate Firmware
After setting everything, click “Generate a new version” and give it a description. You will then be taken back to the voice SDK section, where you can see your product. Click the “Generate SDK” tab; generating the SDK/firmware takes 30 to 35 minutes at most. Download the generated firmware and extract the file.

Firmware generation section in the Ai-Thinker SDK – allow 30–35 minutes for compilation before downloading the offline voice firmware

Step 10: Download the UniOneUpdateTool Flash Utility

Download the Unicommon.dll, UniCommunicateSwitch.dll and UniOneUpdateTool.exe from Hummingbird-M-Update-Tool V1.0

After downloading, run UniOneUpdateTool.exe and connect all the components as per the circuit diagram. Then plug the USB-to-TTL converter into a USB port of the laptop; the COM port will appear in the UniOneUpdateTool window.

Download the UniOneUpdateTool flash utility – three files are required: Unicommon.dll, UniCommunicateSwitch.dll, and UniOneUpdateTool.exe

Step 11: Flash Firmware to the SU-03T Module

[Figure] Flashing firmware using UniOneUpdateTool – select the uni_app_release_update.bin file, click Burn, and wait for all ports to turn yellow before re-powering the SU-03T

In the UniOneUpdateTool window, click the 选择 (Choose) button, browse to the extracted firmware folder, and select the uni_app_release_update.bin file. Then click 烧录 (Burn) and wait until all the port indicators turn yellow. Once they have, remove the power-pin jumper from the USB-to-TTL converter and insert it again; you can now see the firmware being flashed into the module. Also, spare some time to take a look at our other electronics projects to get more ideas in the field of electronics.

Working Demo of Offline Voice Recognition Module

Applications of Offline Voice Recognition Modules

1. Smart Home Automation
Offline voice recognition modules can be used to control home appliances such as lights, fans, and other electronic devices using voice commands. This allows users to operate devices easily without using switches or mobile applications.
2. Assistive Technology
Voice-controlled systems can help elderly people and individuals with physical disabilities control electronic devices more conveniently. Simple voice commands can allow them to turn lights on or off without needing physical interaction.
3. Industrial Automation
In industrial environments, voice control can be used to operate certain machines or indicators where manual operation may be difficult. Offline voice systems improve reliability since they do not depend on internet connectivity.
4. Automotive Control Systems
Offline voice recognition can be integrated into vehicles to control features such as lights, music systems, or navigation functions. This allows drivers to operate systems hands-free, improving safety and convenience.
5. Educational and Embedded System Projects
Offline voice modules are widely used in educational projects and research to demonstrate voice-based human-machine interaction, as they are low-cost and easy to interface with common microcontrollers.
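Across all of these applications, the pattern is the same: the module recognises a phrase from its fixed, pre-trained command set and drives a GPIO or PWM output in response. The sketch below illustrates that mapping in Python; the command strings and pin numbers are hypothetical examples, not values taken from the SU-03T firmware.

```python
# Hypothetical sketch of how an offline voice module maps recognised
# phrases to output actions. Commands and pin numbers are illustrative.

COMMAND_MAP = {
    "turn on the light":  ("gpio", 2, 1),    # drive GPIO 2 high
    "turn off the light": ("gpio", 2, 0),    # drive GPIO 2 low
    "increase brightness": ("pwm", 3, 200),  # raise PWM duty on pin 3
    "decrease brightness": ("pwm", 3, 60),   # lower PWM duty on pin 3
}

def handle_command(phrase):
    """Return the output action for a recognised phrase, or None."""
    action = COMMAND_MAP.get(phrase.lower().strip())
    if action is None:
        return None  # phrase not in the offline command set: no action
    kind, pin, value = action
    return f"{kind.upper()} pin {pin} -> {value}"

print(handle_command("Turn on the light"))  # GPIO pin 2 -> 1
```

Because the lookup table lives inside the module, recognition works without any network round trip, which is exactly why the responses feel instant.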

Application | How Offline Voice Recognition Helps | Devices Controlled
Smart Home Automation | Local control without cloud dependency; works during internet outages | Lights, fans, curtains, sockets
Assistive Technology | Enables hands-free device control for elderly and differently-abled users | Lamps, TV, door locks
Industrial Automation | Reliable in offline factory environments; no latency from cloud calls | Indicators, alarms, and conveyors
Automotive Systems | In-car voice control without mobile data; instant response | Lighting, HVAC, infotainment
Educational & Maker Projects | Low-cost entry point for voice HMI projects; no API keys needed | LEDs, servos, buzzers, displays

Troubleshooting the SU-03T Offline Voice Recognition Module

Issue 1: Voice command is not recognised
Solution:

This may occur if the spoken command does not exactly match the predefined command stored in the module. Ensure that the command is spoken clearly and with proper pronunciation. Also, check whether the correct voice command dataset has been uploaded to the module using the official configuration tool.

Issue 2: The LED does not turn ON or OFF
Solution:

Check the wiring between the SU-03T module and the LED. Make sure the LED is connected to the correct GPIO pin with a current-limiting resistor. Also, verify that the output pin configuration in the software matches the actual hardware connection.
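Choosing the current-limiting resistor is simple arithmetic: subtract the LED's forward voltage from the supply (or GPIO) voltage and divide by the desired current. The helper below works through one example; the 3.3 V supply, 2.0 V forward drop, and 10 mA target are assumed illustrative values, not SU-03T specifications.

```python
def led_resistor(v_supply, v_forward, i_led_ma):
    """Minimum series resistance in ohms: R = (Vs - Vf) / I."""
    return (v_supply - v_forward) / (i_led_ma / 1000.0)

# Example: 3.3 V output, red LED (~2.0 V forward drop), 10 mA target
r = led_resistor(3.3, 2.0, 10)
print(round(r))  # ~130 ohms; use the next standard value, e.g. 150 ohms
```

Rounding up to the next standard resistor value keeps the LED current slightly below the target, which is the safe direction.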

Issue 3: Module is not responding to voice input
Solution:

This can happen if the microphone is not detecting sound properly or if the module is not powered correctly. Ensure that the module receives the required power supply and that the microphone area is not blocked. Speaking closer to the module can also improve detection.

Issue 4: PWM control is not working properly
Solution:

If the LED brightness or motor speed does not change, verify that the PWM pin is correctly configured in the software. Check whether the PWM output pin is properly connected to the device and confirm that the duty cycle settings are correctly applied.
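A common source of PWM confusion is the mapping between a brightness percentage and the raw compare value the timer expects. The snippet below shows that conversion for a generic 8-bit PWM peripheral; the 8-bit resolution is an assumption for illustration, so check your module's actual timer resolution.

```python
def percent_to_duty(brightness_percent, resolution_bits=8):
    """Convert a brightness percentage to a PWM compare value."""
    max_count = (1 << resolution_bits) - 1          # 255 for 8-bit PWM
    pct = max(0, min(100, brightness_percent))      # clamp to 0-100 %
    return round(pct * max_count / 100)

print(percent_to_duty(50))   # 128 on an 8-bit timer
print(percent_to_duty(100))  # 255 (fully on)
```

If the computed duty value changes but the device does not respond, the problem is almost certainly in the pin configuration or wiring rather than the duty-cycle arithmetic.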

Issue 5: Module not detected while configuring through the computer
Solution:

Ensure that the USB-to-Serial converter or programming interface is properly connected. Install the correct drivers and verify that the correct COM port is selected in the configuration software. Restarting the software or reconnecting the module may also resolve the issue.

Future Enhancements

  • Multi-Device Control
    The system can be expanded to control multiple devices such as fans, motors, and home appliances using different voice commands.

  • Smart Home Integration
    It can be integrated with a complete smart home system to control lighting, security systems, and other automation devices.

  • Mobile Application Interface
    A mobile application can be added to monitor and control devices along with voice commands.

  • Motor and Appliance Control
    The system can be enhanced to control motors, pumps, and other electrical appliances using voice commands.

  • Custom Voice Command Expansion
    More voice commands can be added to increase the functionality and control more operations in the system.

Conclusion

This project demonstrates a simple offline voice-controlled system built around the SU-03T voice recognition module. By configuring voice commands through the Ai-Thinker offline voice recognition SDK and flashing the firmware with UniOneUpdateTool, you get a reliable, private, low-latency voice control system that requires zero internet connectivity. The project also shows how the module's GPIO and PWM outputs can drive devices such as LEDs directly from voice commands, providing a practical example of voice-based human-machine interaction in embedded systems, and the same approach can be expanded for automation and smart-control applications in the future. We invite you to look into our projects like "Building a Voice Controlled Home Automation System with Arduino", which focuses on voice-based appliance control, and "Voice Controlled Lights using Raspberry Pi", which demonstrates smart lighting automation using speech commands and GPIO interfacing.

Frequently Asked Questions

⇥ Does the module require an internet connection to work?
No, the module works completely offline. All voice commands are processed inside the module, which makes the system faster and more reliable.

⇥ How are voice commands added to the module?
Voice commands can be configured and uploaded using the official configuration tools and SDK available on the platform provided by Ai-Thinker.

⇥ What are the main advantages of using an offline voice recognition module?
Offline voice recognition provides faster response time, improved privacy, and better reliability since it does not depend on internet connectivity.

⇥  Can the module control devices other than LEDs?
Yes, the module can control various devices such as motors, relays, fans, and other appliances through its GPIO pins, depending on the circuit design.

⇥  Is it possible to change or update the voice commands later?
Yes, voice commands can be modified or updated by reconfiguring the settings in the SDK platform and uploading the new firmware to the module.

⇥  How long does it take for the Ai-Thinker SDK portal to generate new firmware for the SU-03T?
After you click 'Generate New Version', the firmware compiles on an Ai-Thinker cloud server in about 30–35 minutes. Once it finishes, a ZIP file containing the compiled firmware appears in the SDK version list. Download and unzip it, then locate the uni_app_release_update.bin file for the flashing process.

⇥  What's the recommended flashing tool for the SU-03T voice module?
The recommended flashing tool for the SU-03T is the UniOneUpdateTool (part of the Hummingbird-M-Update-Tool V1.0 package). Connect your SU-03T to your computer through a USB-to-TTL converter, select the firmware .bin file in the tool, click Burn, wait for all port indicators to turn yellow, and then power-cycle the SU-03T after the process completes.

Voice-Controlled Projects

Previously, we have explored several voice-controlled projects using platforms like Amazon Alexa and hardware such as ESP8266 and Raspberry Pi. If you want to learn more about these implementations, the links are provided below.

HomePod S3 - A Smart Desk companion

Offline smart desk companion with touch, voice, timer, to-do, medicine reminders and local web control powered by ESP32-S3.

Alexa Voice Controlled LED using Raspberry Pi and ESP12

In this DIY tutorial, we are making an IoT project to control home appliances from anywhere in the world using AlexaPi and ESP12E.

IoT-based Voice Controlled Home Automation using ESP8266 and Android App

Voice-controlled home automation using an ESP8266 Wi-Fi module, where you can control your Home AC appliances using your voice through an Android App from anywhere in the world.
