FormFix: AI Fitness Trainer

Hello everyone!
I’m Georgette Eriel Camua and I'm super excited to share my very first AI project as a first-year student in the Creative Technology & AI course at Howest University of Applied Sciences!
Meet FormFix — an AI-powered fitness trainer that gives real-time feedback on your plank and squat form using pose detection, LEDs, and even a buzzer and LCD display. This project was made for people who want to start going to the gym but feel unsure or shy about whether they’re doing exercises correctly. It's like having a gym buddy who tells you if you're going to break your back or not 😅.
FormFix detects if your plank or squat form is correct. It gives feedback using:
- LED lights (color-coded based on what you’re doing wrong)
- A buzzer if your form is straight-up bad
- A web interface to visualize your skeleton in real time
It can detect:
- If your back is curved
- If your hips are sagging
- If your squat is deep enough
Let me take you through how I built FormFix from the ground up!
Supplies
To build FormFix, I primarily used a laptop with a built-in webcam, which captures the user's movement during the workout. For the AI part, I used MediaPipe's pre-trained pose detection model together with Python, so no custom training was required.
To give real-time physical feedback, I used a Raspberry Pi along with the Freenove Project Kit, which includes components like LEDs (to show where the user’s form is off), a buzzer (to alert the user when their form is incorrect), and an LCD display.
For the physical setup, I laser-cut a simple 4mm multiplex sheet into a box to hold the webcam and some of the components. This helped keep everything organized and gave the project a polished look.
These were the core materials, but depending on your version of the project, you could easily simplify or upgrade parts based on what you have access to.
Attached here is my Bill of Materials file, which shows the total estimated cost of the project.
Capturing Data in Real Time

For this project, I didn’t need to manually annotate any data or train a custom model. Instead, I used MediaPipe Pose, a pre-trained pose estimation model that can detect 33 keypoints on the human body in real time using just a webcam.
Since my goal was to evaluate squat and plank form, I focused only on a few essential keypoints: the hip, shoulder, knee, and ankle. These gave me all the positional data I needed to calculate angles and analyze posture accurately.
By running MediaPipe live through my application, I could extract the coordinates of these keypoints from the video stream and use them to determine whether the user was performing the movement correctly—no dataset downloading, no annotation, just real-time keypoint extraction straight from the webcam feed!
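If you're curious what that looks like in code, here's a minimal sketch of the extraction loop, assuming the opencv-python and mediapipe packages are installed (only the left-side keypoints are shown to keep it short):

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

cap = cv2.VideoCapture(0)  # built-in laptop webcam
with mp_pose.Pose(min_detection_confidence=0.5,
                  min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe wants RGB; OpenCV delivers BGR
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            lm = results.pose_landmarks.landmark
            # Only the keypoints FormFix cares about (left side shown;
            # coordinates are normalized to the 0-1 range)
            shoulder = lm[mp_pose.PoseLandmark.LEFT_SHOULDER]
            hip = lm[mp_pose.PoseLandmark.LEFT_HIP]
            knee = lm[mp_pose.PoseLandmark.LEFT_KNEE]
            ankle = lm[mp_pose.PoseLandmark.LEFT_ANKLE]
            print(f"hip at ({hip.x:.2f}, {hip.y:.2f})")
cap.release()
```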
What I’m Predicting


For squats, the model checks:
- If knees go past toes
- If the squat is low enough
- If the back is straight
For planks:
- If hips are sagging
- If elbows are misaligned
- If the back is not straight
To run these checks, I calculated joint angles (like the back angle and knee angle) from the keypoint coordinates and compared them against thresholds, as shown in the sketch below.
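The angle math itself is just vector geometry between three keypoints. Here's a minimal version (the example coordinates and the 90-degree cutoff are illustrative, not my exact values):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b (in degrees) between segments b->a and b->c.
    Each point is an (x, y) pair, e.g. normalized MediaPipe coordinates."""
    a, b, c = np.array(a), np.array(b), np.array(c)
    ba, bc = a - b, c - b
    cosine = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))

# Knee angle from hip, knee, and ankle keypoints (example coordinates)
knee_angle = joint_angle((0.52, 0.48), (0.50, 0.65), (0.51, 0.82))
# The 90-degree threshold below is just illustrative, not my exact value
squat_deep_enough = knee_angle < 90
```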
Hardware Feedback System
Here’s where the Raspberry Pi and Freenove board come in.
- LED colors:
  - Blue = squat not low enough
  - Yellow = elbows or knees misaligned
  - Red = sagging hips
  - Purple = curved back
- LCD display: shows the percentage of correctness
- Buzzer: goes off when your form is trash 🙃
- Button: to start the exercise and choose between plank/squat mode
All the feedback is synchronized with the AI predictions from the pose analysis running on my laptop.
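Here's a rough sketch of how the Pi side can drive those parts. I'm using the gpiozero library here (any GPIO library works), and the pin numbers and error-code names are placeholders you'd match to your own Freenove wiring; the LCD isn't shown since it uses the kit's own driver library:

```python
from gpiozero import LED, Buzzer, Button

# Pin numbers are placeholders; match them to your Freenove wiring
leds = {
    "squat_too_high": LED(17),  # blue
    "misaligned": LED(27),      # yellow
    "sagging_hips": LED(22),    # red
    "curved_back": LED(23),     # purple
}
buzzer = Buzzer(24)
mode_button = Button(25)  # switch between plank and squat mode

def show_feedback(error_code, form_is_bad):
    """Light the LED that matches the detected error; buzz on bad form."""
    for name, led in leds.items():
        if name == error_code:
            led.on()
        else:
            led.off()
    if form_is_bad:
        buzzer.on()
    else:
        buzzer.off()
```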
Visualizing the Pose Detection
On my laptop, I built a Streamlit web application to display the video stream and provide real-time feedback. In the app, I fetched frames from the stream, ran MediaPipe Pose on them to extract the keypoints (hip, knee, shoulder, ankle), and used that data to:
- calculate angles,
- check if the form is correct,
- and display color-coded feedback directly on the skeleton.
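A stripped-down version of that app loop looks something like this (run it with `streamlit run app.py`; the form checks from the previous step would slot in where the skeleton gets drawn):

```python
import cv2
import mediapipe as mp
import streamlit as st

mp_pose = mp.solutions.pose
mp_drawing = mp.solutions.drawing_utils

st.title("FormFix")
frame_slot = st.empty()   # placeholder that gets updated every frame
status_slot = st.empty()

cap = cv2.VideoCapture(0)
with mp_pose.Pose() as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # Draw the skeleton on the frame; a full version would pick
            # the drawing color based on the form check result
            mp_drawing.draw_landmarks(
                frame, results.pose_landmarks, mp_pose.POSE_CONNECTIONS)
            status_slot.write("Checking your form...")
        frame_slot.image(frame, channels="BGR")
cap.release()
```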
TL;DR: The laptop detects the movement and calculates the angles, then sends feedback to the Raspberry Pi, which activates the buzzer and LEDs!
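For that laptop-to-Pi link, here's one simple way to do it. I'm assuming a plain TCP socket with a placeholder IP address and a made-up JSON message format here; something like MQTT or HTTP would work just as well:

```python
import json
import socket

PI_ADDRESS = ("192.168.0.42", 5005)  # placeholder IP and port for the Pi

def send_feedback(error_code, correctness):
    """Send one feedback update to the Pi as a JSON line."""
    msg = json.dumps({"error": error_code, "score": correctness}) + "\n"
    with socket.create_connection(PI_ADDRESS, timeout=1) as conn:
        conn.sendall(msg.encode("utf-8"))

send_feedback("sagging_hips", 72)  # the Pi routes this to its LED/buzzer code
```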