SmartShot - Computer Vision Powered Basketball Hoop
by kilianraijmakers

Welcome to SmartShot, an AI-powered basketball shot classification system designed to bring real-time analytics and smart feedback straight to the court! With computer vision and a custom-trained detection model, SmartShot can automatically recognize and track key basketball shot outcomes, such as:
- Goals (successful shots)
- Rim hits
- Backboard hits
- Misses
Built with the power of YOLOv8, a cutting-edge object detection model, and trained on a tailored dataset of real basketball action, SmartShot uses a simple webcam and a lightweight on-screen interface to provide live performance tracking. Whether you're practicing alone, coaching a team, or just exploring the world of AI, SmartShot helps make training smarter, more interactive, and way more fun.
Originally created as part of an educational project, this system shows how accessible and practical artificial intelligence can be, even while playing sports!
Supplies
For connecting the monitor to the Raspberry Pi, I used:
- Raspberry Pi 5 - 8GB
- Active Cooler for Raspberry Pi 5 - keeps the system cool during long use
- GeeekPi 7 Inch Capacitive Touchscreen Monitor - used as a compact visual display
- SANDISK Ultra MicroSD Card (32GB) - for storing the Pi OS and files
- USB and HDMI cables
For building the basketball setup, I used:
- Yaheetech Basketball Hoop with Wheels - adjustable height with backboard and rim
- Spalding React FIBA TF 250 - the basketball used during testing
- Hose clamp - to attach the Raspberry Pi and monitor to the hoop
- Velcro strips - to secure components neatly
- Old plastic box (first-aid or lunch box) - repurposed to house the 7-inch monitor
- Perforated aluminium sheet - to make the plastic box more rigid
- Metal pipe - camera mount that provides a top-down view of the basketball hoop
- Flamco rail - for mounting the pipe to the backboard
- M6 and M4 nuts, bolts, and washers - to attach components neatly
All materials and components used for this project are listed in the attached PDF file. It covers every item, from electronics to hardware, with descriptions, quantities, and where each was sourced. The total cost was about €450, but it can be cheaper if you reuse parts!
Downloads
Create a Raspberry Pi Image

Use the Raspberry Pi Imager to flash the Raspberry Pi OS onto a microSD card.
- Download it from the official Raspberry Pi website.
- Choose Raspberry Pi OS
- Select your SD card and flash the image.
Once done, insert the card into your Raspberry Pi, power it on, and follow the on-screen instructions to complete the basic setup (like Wi-Fi, location, and password).
TIP! Watch the embedded video for a step-by-step walkthrough!
SSH Connection in Visual Studio Code

To control your Raspberry Pi from your laptop without using a separate monitor or keyboard, set up an SSH connection in Visual Studio Code using the Remote - SSH extension.
Steps:
- In VS Code, go to the Extensions tab and install Remote - SSH.
- Make sure SSH is enabled on your Raspberry Pi (sudo raspi-config ➜ Interface Options ➜ SSH ➜ Enable).
- Ensure your laptop and Pi are connected to the same network.
- Press F1 in VS Code → choose Remote-SSH: Connect to Host... → type pi@<your-pi-ip-address> (e.g., pi@192.168.0.123).
Once connected, you can write, edit, and run code remotely on your Pi, without needing to touch a mouse or keyboard on the Pi itself.
Search Pictures for the Dataset



To train a computer vision model, you’ll need a solid dataset. I started by collecting pictures of basketball-related objects and outcomes, including basketballs, rim hits, backboard hits, and goals.
Make sure your images are clear, varied, and relevant. The better your dataset, the more accurate your AI model will be. You can take photos yourself (recommended for realism) or find royalty-free images online from platforms like Unsplash or Pexels.
Gathering Edge Cases Data



After collecting standard images, I gathered edge case photos. These are tricky or unusual situations that could confuse the AI. Think: blurry shots, unusual angles, poor lighting, or objects that look like basketballs but aren’t (like yoga balls or orange objects).
These edge cases are super important. They train the model to be more robust and accurate in real-world conditions, where everything isn't always perfect. This extra step helps prevent false detections and improves reliability during live use.
Label All Gathered Images


Once I had collected enough images, I uploaded them to Roboflow, a tool that makes labeling fast and easy. For each image, I manually labeled the correct objects with their respective classes, such as basketball, goal, rim hit, or backboard hit.
This step is crucial because it teaches the AI what to look for in each frame. The more accurate and consistent your labels are, the better your model will perform later on.
Creating Dataset Version

Once I finished labeling all the images, I used Roboflow to generate a new dataset version. This process allowed me to apply data augmentations like flipping, rotation, and brightness changes to improve the model's robustness.
Downloading the Dataset

After creating the final version of my dataset in Roboflow, I downloaded it in the YOLOv8 format, which is compatible with the model I planned to use. Download the dataset as a .zip file and extract it into the correct project directory on your development machine; this ensures YOLO can find the data.yaml file and training images when you start training.
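As a quick sanity check after unzipping, a short pure-Python script can confirm the export has the layout YOLOv8 expects. The data.yaml plus train/valid folder names below match a standard Roboflow YOLOv8 export; adjust them if your export differs.

```python
from pathlib import Path

def check_yolo_export(root):
    """Return a list of problems with an extracted YOLOv8 dataset export.

    An empty list means the layout looks right. The expected structure
    (data.yaml plus train/valid images and labels) is the standard
    Roboflow YOLOv8 export layout.
    """
    root = Path(root)
    problems = []
    if not (root / "data.yaml").is_file():
        problems.append("missing data.yaml")
    for split in ("train", "valid"):
        for sub in ("images", "labels"):
            if not (root / split / sub).is_dir():
                problems.append(f"missing {split}/{sub}/")
    return problems
```

Running something like `check_yolo_export("dataset")` before training catches a misplaced data.yaml early instead of partway into a failed run.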
Organizing Project Structure in Visual Studio Code

To keep everything clean and manageable, I created separate folders in Visual Studio Code. This makes it easier to find files, debug issues, and work more efficiently.
Here's the folder structure I used:
- dataset → for code related to downloading, training, evaluating, and tuning the YOLO model
- ai → for the YOLO-based webcam detection script
- raspberry_pi → for the code running on the Raspberry Pi that handles the external monitor display
Keeping everything well-structured not only saves time but also helps avoid confusion when switching between different parts of the project.
Making the Code to Train a Model With Python

Now that we’ve collected and labeled our data, it’s time to train a custom object detection model using YOLOv8, a powerful deep learning model that works well for real-time object detection tasks like detecting basketball goals, rim hits, and misses.
To train my model, I wrote a Python script using the Ultralytics YOLOv8 library. I started by importing YOLO and selecting a pretrained model (yolov8s.pt), which I loaded onto the GPU using .to("cuda") to speed up training. I then trained it on my Roboflow-exported dataset by pointing to the data.yaml file and specifying parameters such as the number of epochs, imgsz (image size), and batch size. I also set device=0 to use the GPU and organized the results using the project and name fields. Once training completes, the best-performing model is automatically saved to runs/detect/train/weights/best.pt. This script makes training repeatable and customizable, letting me tweak parameters like batch size or epochs depending on how the model performs.
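A training script along those lines might look like the sketch below. It assumes the `ultralytics` package is installed, a CUDA-capable GPU is available, and the Roboflow export sits in a dataset/ folder next to the script; the default hyperparameter values are illustrative, not the ones used for the final model.

```python
import sys

def training_args(epochs=100, imgsz=640, batch=16):
    """Collect the training parameters in one place so they are easy to tweak."""
    return {
        "data": "dataset/data.yaml",  # path to the Roboflow export (assumption)
        "epochs": epochs,
        "imgsz": imgsz,
        "batch": batch,
        "device": 0,                  # train on the first GPU
        "project": "runs/detect",     # results are grouped under runs/detect/
        "name": "train",              # so best.pt lands in runs/detect/train/weights/
    }

def train():
    # Imported here so the file can be loaded and inspected without ultralytics.
    from ultralytics import YOLO
    model = YOLO("yolov8s.pt")  # small pretrained model as a starting point
    model.to("cuda")            # move the model to the GPU
    model.train(**training_args())
    # The best weights are saved automatically to runs/detect/train/weights/best.pt

# Guarded so importing this file never accidentally starts a long training run.
if __name__ == "__main__" and "--train" in sys.argv:
    train()
```

Keeping the parameters in one dictionary makes it easy to rerun training with, say, a smaller batch size when GPU memory runs out.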
Training the Model

With the training script ready, I ran it to start the actual model training process. In this step the model analyzes all the labeled images from my dataset and learns to recognize patterns like goals, rim hits, and more. Depending on the number of epochs and the size of the dataset, training can take a while; using a GPU speeds things up significantly. During training, the model outputs regular updates about its performance, such as accuracy and loss values, so I could monitor how well it was learning. Once training finished, the best version of the model was automatically saved to my project folder, ready to be used for real-time basketball detection.
Making Webcam Code for Realtime Detection



After training the model, it was time to put it into action using a webcam for real-time detection. I created a Python script that uses OpenCV to stream video frames from my webcam and pass each one to the YOLOv8 model I trained. The model detects outcomes like “goal,” “rim hit,” or “miss,” and I overlay those results on the screen using bounding boxes and labels.
To make the system more responsive and avoid counting the same action multiple times, I implemented a cooldown mechanism. For example, when a goal is detected, the system waits 2 seconds before it can count another goal. I also added logic to prevent false misses—for instance, if a basketball is seen but no goal follows, it counts as a miss only after a short timeout. Additionally, if a rim or backboard hit happens but isn’t followed by a goal, it’s automatically converted to a miss after 2 seconds. This logic helps ensure the counters reflect accurate, real-world outcomes and keeps the display system clean and reliable.
NOTE! This is a general explanation of how the code works. The most up-to-date and fully functional version can be found in the GitHub repository linked in the final step.
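The cooldown and miss-conversion logic can be sketched as a small, camera-independent state machine. The 2-second windows come from the description above; the class, method, and label names are my own illustration (the ball-seen-without-goal timeout follows the same pattern and is omitted for brevity), and the repository code differs.

```python
import time

class ShotTracker:
    """Illustrative sketch of the cooldown / miss-conversion logic."""

    GOAL_COOLDOWN = 2.0    # seconds during which repeat goal detections are ignored
    PENDING_TIMEOUT = 2.0  # a rim/backboard hit becomes a miss after this long

    def __init__(self):
        self.counts = {"goal": 0, "rim_hit": 0, "backboard_hit": 0, "miss": 0}
        self._last_goal = float("-inf")
        self._pending_since = None  # time of a hit not yet resolved into goal/miss

    def on_detection(self, label, now=None):
        """Feed one detection from the model, e.g. 'goal' or 'rim_hit'."""
        now = time.monotonic() if now is None else now
        self._expire_pending(now)
        if label == "goal":
            if now - self._last_goal >= self.GOAL_COOLDOWN:
                self.counts["goal"] += 1
                self._last_goal = now
                self._pending_since = None  # the hit ended in a goal, not a miss
        elif label in ("rim_hit", "backboard_hit"):
            self.counts[label] += 1
            self._pending_since = now

    def tick(self, now=None):
        """Call once per frame so unresolved hits can time out into misses."""
        self._expire_pending(time.monotonic() if now is None else now)

    def _expire_pending(self, now):
        # A rim/backboard hit with no goal inside the window converts to a miss.
        if (self._pending_since is not None
                and now - self._pending_since >= self.PENDING_TIMEOUT):
            self.counts["miss"] += 1
            self._pending_since = None
```

Keeping this logic out of the OpenCV loop makes it easy to unit-test with fake timestamps instead of a live webcam.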
Installing the Monitor


For this step, I connected the monitor to the Raspberry Pi for the first time to test how everything looked and functioned visually. I powered everything up and made sure the display turned on correctly and filled the screen as expected. I checked the clarity, brightness, and positioning of the monitor to ensure it would be easy to read during live use. This test was mainly to confirm that the monitor setup was working reliably before moving on to integrating the real-time code and detection visuals.
Making Code for Monitor on Raspberry Pi






In this step, I programmed the Raspberry Pi to run a fullscreen graphical interface that displays live basketball shot statistics. I used Python with the tkinter library to build the GUI, which shows a bar chart for goals, rim hits, backboard hits, misses, and total shots. Each bar updates in real time as new events come in. To make communication possible between the Raspberry Pi and my laptop, I included a small Flask server that listens for POST requests sent by the detection system running on my laptop. When a new event like a goal or miss is detected, it sends a request to the Pi, which updates the corresponding counter on the monitor.
I also added a feature to display the IP address of the Raspberry Pi at the top of the screen, making it easier to connect devices on the same network. Buttons were added to the GUI to reset all counters, stop the program, and even remotely start or stop the webcam code running on my laptop. This is done using requests.get() to trigger endpoints on the laptop’s Flask server. Additionally, I created a watcher_loop in a background thread to remove old labels after a certain timeout, keeping the state clean and responsive. This step was crucial in bringing everything together visually and letting me monitor the SmartShot system live on the Pi’s connected screen.
NOTE! This is the general logic behind the monitor control script. For the most recent and polished version, please check out the GitHub repository linked in the final step.
Building the Basketball Hoop


Before I could fully test the SmartShot system, I had to install the basketball hoop itself. This step involved securely mounting the hoop at the correct height and ensuring it was stable enough to handle repeated shots. Once the hoop was in place and everything was aligned properly, I was finally ready to move on to full system testing.
Test the Model



After setting up the monitor interface on the Raspberry Pi, I moved on to testing the entire SmartShot system in action. I placed the camera to face the basketball hoop and began shooting while monitoring the real-time feedback on the screen. The model successfully detected goals, rim hits, and backboard hits, and correctly triggered misses when no successful shot was detected. I paid close attention to how responsive the system was, making sure the cooldowns I had implemented worked properly, so that the same shot wouldn’t be counted multiple times. This step was essential to verify the accuracy and timing of the detection model, and to ensure that every event translated correctly to the live statistics shown on the Pi monitor. It was really satisfying to see everything working together, from the webcam detection to the real-time updates and interaction between devices.
Making Box for Monitor









To protect the monitor and mount it securely outdoors, I decided to repurpose an old first-aid box as the base. Here’s how I transformed it into a functional enclosure:
- Measuring and Marking Screw Holes
I started with a basic plastic first-aid box and placed the monitor inside to measure exactly where the screw holes needed to go. This ensured the monitor would fit snugly and stay firmly in place once mounted.
- Cutting a Support Plate
I cut a rectangular metal plate to match the outer dimensions of the box. Then I drilled matching screw holes into this plate. The plate adds extra support and distributes the mounting pressure evenly, preventing damage to the box or monitor.
- Creating Cable Access Opening
Next, I measured and marked where the power and HDMI cables would need to enter. Once marked, I carefully cut a groove/opening in the side of the box to route the cables neatly into the monitor.
- Making Mounting Slits for Velcro
To make the box mountable on the basketball hoop’s vertical pole, I added 4 slits in the back. These slits allow velcro straps to pass through and hold the box tightly against the pole.
- Sanding for Paint Prep
With all the functional cuts made, I sanded the outside of the box thoroughly. This helped the paint adhere better and removed the glossy finish from the plastic.
- Spray Painting the Box
I then spray painted the entire box black to match the rest of the SmartShot setup. I let it dry overnight to make sure the finish was smooth and fully set.
- Final Assembly
Once dry, I attached the metal support plate to the outside, threaded the velcro straps through the slits, and mounted the monitor securely inside using screws. The result was a mountable display case for my monitor.
Making Box for Raspberry Pi



To protect and mount the Raspberry Pi securely to the pole, I used a plastic Raspberry Pi case and modified it for mounting. I started by grabbing a strong metal hose clamp that could wrap around a vertical pole. I then manoeuvred the clamp through the bottom of the Pi case.
Once the clamp was in position, I clicked the Raspberry Pi board into the case and snapped it shut. Since the setup would be mounted to the basketball hoop and exposed to movement, I reinforced the case by wrapping it with duct tape to ensure it stayed firmly closed and the Pi remained safely in place.
Making Pole for Mounting Camera





1. Bending the Metal Pipe
I started with a long metal pipe that would serve as the camera mount. To get the right viewing angle and distance from the backboard, I had to bend the pipe. The curve ensures that the camera sits far enough from the backboard to capture the hoop and surrounding area clearly.
2. Making Custom Bolts from Threaded Rod
The original bolts included with the basketball hoop were too short to support the added construction. To fix this, I cut a piece of threaded rod into two custom bolts of about 5 cm each. These provided the perfect length for securely attaching the mounting system.
3. Preparing the Flamco Rail
Luckily, I had a Flamco mounting rail that fit the width of my backboard perfectly, so I didn’t need to cut it. If you’re building this yourself, you may need to cut your rail to fit your backboard dimensions.
4. Mounting the Flamco Rail
I attached the Flamco rail firmly to the back of the backboard using the custom-cut threaded rods and nuts. This rail acts as the main support structure for the camera pipe and ensures stability during use.
5. Attaching the Pipe to the Rail
Finally, I drilled a hole through the bent metal pipe and mounted it securely to the Flamco rail. This created a strong, elevated arm to hold the camera in the correct position for basketball shot detection.
Mounting Camera to Pole




1. Drilling a Hole in the Webcam Mount
I began by drilling a hole into the base of the webcam mount so that I could screw it onto the support structure later. The hole needed to be centered and tight enough to keep the webcam stable once mounted.
2. Preparing the Mounting Pole
Before attaching the camera, I had to slightly pinch (flatten) the bent end of the metal pole. This gave me a flat surface to work with and allowed the camera mount to sit flush. Once it was flat enough, I drilled a matching hole into the pole so the screw could go through both the pole and the camera mount smoothly.
3. Aligning and Attaching the Camera
After both holes were ready, I aligned the webcam mount with the pole and inserted a screw through the drilled holes. I used a nut on the back side to fasten it tightly and securely.
4. Final Securing
With everything aligned, I held the webcam firmly in place and tightened the screw. The result was a stable, reliable camera setup facing the basketball hoop, ready for real-time AI detection.
Installing Everything on the Basketball Hoop






First, I mounted the pole, which I had already prepared in the earlier camera-pole step, to the backboard. Then I attached the Raspberry Pi case securely to the pole using the hose clamp, and after that I mounted the monitor box. With everything in place, I connected all the necessary cables between the monitor, Raspberry Pi, and camera. The full SmartShot setup was now installed.
Play Ball! 🏀




With everything set up and connected, it was finally time to shoot some hoops and enjoy the SmartShot system in action. The camera, model, and monitor all worked together to give real-time feedback. I hope you have just as much fun building and using your own version. Enjoy your creation and have fun playing!
Github Link
You can find all the code and resources for this project on my GitHub repository.
Check it out here: GitHub - SmartShot Project