Smart Lamps

by xChocapic in Circuits > Raspberry Pi


frontcover.jpg
secondary1.jpg
secondary2.jpg
mainmodule.jpg

This project is about enabling users to seamlessly operate an AI-powered light system that is highly scalable and fully customizable. The form of the light system can also be altered to suit one's needs (desk lamps can be turned into ceiling lamps, ambient lights, light bulbs, etc.).


The project has one main module and two secondary ones. The main module is powered by a Raspberry Pi 5, on which a YOLOv8n model runs with input from a generic web camera. The YOLO model recognizes five gestures (fist, palm, v sign, like and dislike) that have been mapped to the following commands:

-fist: lamp selection; holding the fist gesture in front of the camera for more than two seconds starts cycling between the lamps, and releasing it selects the current lamp

-palm: turns the selected lamp on or off

-v sign, like and dislike: preconfigured light modes for each lamp


After the main module handles the recognition, the script sends a message over Wi-Fi using the MQTT protocol to the selected lamp. The program for the main module has been split into two scripts for increased modularity and easier control: a "publish" script that handles the gesture recognition and sends messages to the selected topic, and a "subscribe" script that listens for messages on the main lamp's MQTT topic and lights up the lamp accordingly.
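As an illustration (not the final code, which is attached at the end of this guide), a minimal publish-side sketch could look like the following. It assumes an MQTT broker such as Mosquitto running on the local network, the paho-mqtt Python library, and a hypothetical "lamp/<id>" topic naming scheme:

import paho.mqtt.publish as publish

BROKER_IP = "192.168.1.10"  # placeholder: replace with your broker's address

# Hypothetical mapping from a detected gesture label to a lamp command
GESTURE_TO_COMMAND = {
    "palm": "toggle",    # turn the selected lamp on/off
    "v_sign": "mode_1",  # preconfigured light mode 1
    "like": "mode_2",    # preconfigured light mode 2
    "dislike": "mode_3", # preconfigured light mode 3
}

def send_gesture(gesture, lamp_id):
    """Publish the command mapped to a recognized gesture to the lamp's topic."""
    command = GESTURE_TO_COMMAND.get(gesture)
    if command is None:
        return  # "fist" is handled locally (lamp selection), nothing to publish
    publish.single(f"lamp/{lamp_id}", command, hostname=BROKER_IP)

# Example: the model detected "palm" while lamp "lamp2" was selected
send_gesture("palm", "lamp2")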


The two secondary lamp modules are powered by ESP32 microcontrollers connected over Wi-Fi to the main module. Each secondary lamp has a unique ID and therefore subscribes to its own MQTT topic.
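To give an idea of how each secondary module can listen for its own topic, here is a minimal MicroPython sketch; the attached final ESP32 code may be structured differently, and the Wi-Fi credentials, broker address and "lamp/<id>" topic scheme below are placeholders:

import network
import time
from umqtt.simple import MQTTClient

WIFI_SSID = "your_ssid"
WIFI_PASSWORD = "your_password"
BROKER_IP = "192.168.1.10"  # placeholder broker address
LAMP_ID = "lamp2"           # unique ID of this secondary lamp

def on_message(topic, msg):
    # msg arrives as bytes, e.g. b"toggle" or b"mode_1"
    command = msg.decode()
    print("Received command:", command)
    # ...apply the command to the LED ring / OLED here...

# Connect to Wi-Fi
wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect(WIFI_SSID, WIFI_PASSWORD)
while not wlan.isconnected():
    time.sleep(0.5)

# Connect to the broker and subscribe to this lamp's topic
client = MQTTClient(LAMP_ID, BROKER_IP)
client.set_callback(on_message)
client.connect()
client.subscribe(b"lamp/" + LAMP_ID.encode())

while True:
    client.wait_msg()  # block until a message for this lamp arrives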


Supplies

The following electronics have been used in the project:

Raspberry Pi 5 - runs the recognition and sends signals to the secondary modules

ESP32 boards - can connect to Wi-Fi and listen to the commands sent by the main module

Pixel Ring - light source of the project, highly customizable

OLED screens - display essential information on each lamp

Distance sensor - helps with gesture recognition (measures the hand-to-camera distance so the hand can stay in the model's optimal range)

Web camera - provides live input data for the model

These can be altered and adjusted to accommodate the user's needs (e.g. a temperature and humidity sensor, a speaker, etc.)

The materials for the project are:

0.4 cm Plywood

0.5 mm Plastic

0.5 cm Steel rods for structural integrity

These can also be adjusted and altered, depending on the materials available to the user. In this project, the wood rectangles are laser cut, but they can also be cut with a saw and then smoothed with a file or sandpaper.


The following Bill Of Materials should be helpful for gathering the necessary components:

Gathering Dataset

Screenshot 2025-06-19 140811.png

The first part of this project is training the YOLO model, and the starting point of the training process is collecting the data. Depending on your use case, you can train the model entirely on your own custom dataset, but in my case I used a mix of my own dataset and some online resources.

I used Roboflow to gather, label and export the dataset in the required format. Their software helps with labeling data (it has an auto-labeler), but I did it manually, which was still fast thanks to their interface.

Additionally, I used an official dataset to complete mine. This helps make the model more robust and mitigates possible edge cases.


After the dataset was ready, I downloaded it in the appropriate format:

You will now see that the dataset is already formatted and that it can be used to train the model. For safety, I also added this line to my data.yaml file to avoid any "Missing file" errors:


path: ../newdataset


Also, it is important to run the commands in a virtual environment set up with all the necessary libraries. Additionally, since the path is relative, the next commands should be run from the same folder as the dataset.
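For orientation, a full data.yaml could look roughly like this (the folder layout and class names below are only an example, so keep whatever your Roboflow export generated):

path: ../newdataset
train: train/images
val: valid/images

names:
  0: dislike
  1: fist
  2: like
  3: palm
  4: v_sign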

Training the Model

After the dataset is ready, I started training the model locally on my laptop. For this step, it's quite important that the machine used for training the model has a GPU. This way, the training time is greatly reduced.

I used the following script to train the model (make sure that ultralytics is installed in your venv):


import os
os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE'

from ultralytics import YOLO

def train_model():
    # Load the pretrained YOLOv8 nano model
    model = YOLO("yolov8n.pt")

    # Train the model
    results = model.train(
        data="path_to_dataset",  # path to your data.yaml file
        epochs=50,
        imgsz=640,
        batch=16,
        device='cuda',  # Use GPU for training
        name="gesture_recognition"
    )
    print(results)

if __name__ == '__main__':
    train_model()


The parameters can be adjusted as needed but these specific parameters proved good enough for the project.


On a dataset of around 5,000 pictures, the model took about half an hour to train, since it is a nano model. The nano variant is necessary because the Raspberry Pi cannot run larger models (S, let alone L) at a reasonable FPS, which would result in poor recognition performance and noticeable delay.

Testing the Model and Adjusting the Dataset

Now that the model is trained, I tested it to see its performance against different backgrounds and lighting conditions. I also found some edge cases, which I later addressed by adding more specific images to my dataset. I used the following code to run camera inference with the YOLO model:


import cv2
from ultralytics import YOLO
import time

def test_model_on_camera():
    # Load the trained model from the runs directory
    model_path = "path_to_your_model"
    try:
        model = YOLO(model_path)
        print(f"Model loaded successfully from {model_path}")
    except Exception as e:
        print(f"Error loading model: {e}")
        # Fall back to the base model if the trained model isn't found
        model = YOLO("yolov8n.pt")
        print("Loaded base YOLOv8n model instead")

    # Open the camera (index 1 here; use 0 for the default built-in webcam)
    cap = cv2.VideoCapture(1)
    if not cap.isOpened():
        print("Error: Could not open camera.")
        return
    print("Press 'q' to quit")

    # For FPS calculation
    prev_time = time.time()

    while cap.isOpened():
        success, frame = cap.read()
        if not success:
            print("Error: Failed to read from camera.")
            break

        # Perform inference
        results = model(frame, conf=0.25)

        # Calculate FPS
        current_time = time.time()
        fps = 1 / (current_time - prev_time)
        prev_time = current_time

        # Visualize results on the frame
        annotated_frame = results[0].plot()

        # Display FPS
        cv2.putText(annotated_frame, f"FPS: {int(fps)}", (20, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)

        # Show the frame
        cv2.imshow("YOLO Detection", annotated_frame)

        # Exit if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    # Release resources
    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    test_model_on_camera()


With this, several models can be tested and adjusted as needed.

Building the Main Module Circuit

main.jpg
snippert.jpg

The main module of the lamp is powered by a Raspberry Pi 5 and has the following components:

Distance Sensor

OLED Screen

LED Ring

I used this pinout to make sure that I connected to the proper pins.

I have attached the circuit snippet needed to connect the distance sensor, as well as the whole circuit. Remember that the components also need to be connected to power and ground.
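For reference, this is roughly how the distance sensor reading could be used to check whether the hand is at a good distance from the camera. It is only a sketch, assuming an HC-SR04-style sensor read through the gpiozero library; the pin numbers and the "optimal" range are placeholders:

from gpiozero import DistanceSensor
import time

sensor = DistanceSensor(echo=24, trigger=23)  # placeholder GPIO pins

OPTIMAL_RANGE = (0.30, 0.80)  # assumed hand-to-camera range in metres

while True:
    distance = sensor.distance  # metres, capped at the sensor's max range
    in_range = OPTIMAL_RANGE[0] <= distance <= OPTIMAL_RANGE[1]
    print(f"Hand at {distance:.2f} m - {'OK' if in_range else 'adjust distance'}")
    # The gesture script could skip inference (or show a hint on the OLED)
    # whenever the hand is outside this range.
    time.sleep(0.2)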

Building the Secondary Modules Circuit

secondary.jpg

The circuit for the secondary modules is quite similar to the main module's, but it lacks the distance sensor (only one is needed, since there is only one camera). Therefore, the needed components are:

ESP32 board

LED Ring

OLED screen

I have also used this pinout mapping for my ESP32 board. Each board model has its own pin mapping, so make sure to look up the one for the board you are using.

This is the final circuit drawing:

As mentioned before, the components also need to be connected to power and ground.
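As a rough idea of how a received command could drive the LED ring on a secondary lamp, here is a short MicroPython sketch; the pin number, LED count, mode names and colours are placeholders, and the attached final code may differ:

import machine
import neopixel

NUM_LEDS = 16  # placeholder: match your ring
ring = neopixel.NeoPixel(machine.Pin(5), NUM_LEDS)  # placeholder data pin

# Hypothetical preconfigured light modes for this lamp
MODES = {
    "mode_1": (255, 180, 100),  # warm white
    "mode_2": (0, 120, 255),    # cool blue
    "mode_3": (255, 0, 80),     # accent colour
}

def apply_command(command):
    if command == "toggle":
        # Turn off if any LED is lit, otherwise switch to the first mode
        is_on = any(ring[i] != (0, 0, 0) for i in range(NUM_LEDS))
        colour = (0, 0, 0) if is_on else MODES["mode_1"]
    else:
        colour = MODES.get(command, (0, 0, 0))
    for i in range(NUM_LEDS):
        ring[i] = colour
    ring.write()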

Test and Adjust Code

Once the circuits have been built, the final code can be implemented and tested. I have attached my final versions of the code.

Build the Lamps and Have Fun!

You now have the freedom of designing your own lamps. Use whatever materials you want and make them suitable for your use case. And have some fun throughout the process!