AI Talking Dinosaur - Educational Toy From Recycled Materials♻️

by be_riddickulous


I designed an AI-powered educational toy that makes learning colors and shapes fun, screen-free, and truly interactive. While it’s built for young children, adults can’t resist playing with it too! 🦖✨

Shaped like a friendly dinosaur with a movable mouth, the toy talks to the child, prompting them to find specific colors or shapes. When a piece is placed in its mouth, a built-in camera and AI model instantly recognize the object, give real-time voice feedback, and light up an RGB LED in response. Correct pieces are "swallowed" with encouragement, while incorrect ones are gently rejected. 🎤💡

It’s more than just a toy - it’s a playful way to introduce children to technology and support early development. Plus, it’s a great chance to repurpose recycled materials into something both adorable and meaningful. ♻️

I had so much fun making it, and I hope you will too!

Supplies


Hardware Components

  1. Raspberry Pi 5 – Runs logic for LED and servo motor control (older models also work)
  2. USB Webcam – Captures images of objects placed in the dinosaur’s mouth
  3. Servo Motor – Opens and closes the dinosaur’s mouth
  4. WS2812 RGB LED Module (8 LEDs) – Provides consistent lighting for image capture and changes color for feedback
  5. Small Speaker – Plays pre-recorded voice prompts and feedback
  6. Laptop – Hosts and runs the AI model, processes webcam input, and handles audio playback
  7. Physical Colored Pieces – Used for interaction and data collection
  8. Male-to-Female Jumper Cables – Connect LED and servo. If your cables are too short, you can extend the length by chaining two or three cables together.
  9. MicroSD Card – Required to boot and run the Pi

Crafting Materials

  1. Recycled Packaging Materials – Plastic bottles (1.5 L), plastic cups, thick paper, polystyrene pieces, 2 rubber bands
  2. Papier-Mâché – wrapping paper strips, PVA glue and water mixture
  3. Super-Fine Papier-Mâché Mix – Optional; can be bought or made for smoother finishing
  4. Base Layer for Paint – Gesso or similar primer
  5. Acrylic Paints – Various colors for finishing touches
  6. Paintbrushes – For applying glue and paint
  7. Craft Glue or Hot Glue Gun – For assembling parts
  8. Paper tape – For shaping and reinforcing the structure
  9. Scissors or Craft Knife – For cutting materials
  10. Bolt – Used to hinge the dinosaur’s jaw
  11. Wooden Shapes (Various Colors) – Used as objects for play and AI detection
  12. Optional Decorations – Plastic eyes, eyelashes or other fun details to bring the toy to life

To make things easy, I’ve added a complete bill of materials with direct links to online stores - so you can start building right away!

Prepare Your Raspberry Pi


Setup Instructions for Raspberry Pi with Servo and LED

Prepare your Raspberry Pi and hardware:

  1. Install Raspberry Pi OS
    - Download and install the Raspberry Pi Imager on your computer.
    - Use it to write a fresh Raspberry Pi OS image onto your microSD card. Note: this will erase all existing data on the card.
    - Insert the microSD card into your Raspberry Pi and power it on.
    - 👉 Need help with setup? Check out the Getting started - Raspberry Pi Documentation for step-by-step guidance.
  2. Connect your components
    - Use jumper wires to connect the components to your Raspberry Pi.
    - For the LED, I used a WS2812 SPI driver, which communicates with WS2812 LEDs over the SPI interface. This is a workaround since WS2812 LEDs normally require precise timing on a single GPIO pin; using SPI offloads that timing to the SPI peripheral, simplifying control.
    - Connect the LED data line to physical pin 19 (GPIO 10).
    - Connect the servo signal wire to physical pin 12 (GPIO 18).
    - Use 5V power from pin 2 or 4, and ground from any of these pins: 6, 9, 14, 20, 25, 30, 34, or 39, to power both the servo and the LED. ⚠️ Double-check the wiring - mixing up the ground and 5V connections can permanently damage your Raspberry Pi and/or the connected components.
  3. Disable I2C (optional)
    - To avoid pin conflicts and save system resources, disable I2C via raspi-config under Interface Options.
  4. Test your setup
    - Run your test code (a quick LED-only smoke test is sketched right after this list; the full servo-and-LED test appears in the "Test the Jaw Mechanism" step below).
    - The servo arm should move, and the LED should change colors if everything is connected and configured correctly.
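
Before wiring in the servo, you can confirm the LED side on its own. Here is a minimal smoke test - it assumes the rpi5_ws2812 library used in the jaw test code later in this guide and the SPI wiring described above; the colors and timing are arbitrary:

# Minimal LED-only smoke test - assumes the rpi5_ws2812 library and the SPI wiring above
import time

from rpi5_ws2812.ws2812 import Color, WS2812SpiDriver

# 8-LED strip on SPI bus 1, CE0 (same settings as the jaw test code below)
strip = WS2812SpiDriver(spi_bus=1, spi_device=0, led_count=8).get_strip()

for color in (Color(255, 0, 0), Color(0, 255, 0), Color(0, 0, 255)):
    strip.set_all_pixels(color)   # fill the whole strip with one color
    strip.show()
    time.sleep(1)

strip.set_all_pixels(Color(0, 0, 0))  # turn the LEDs off again
strip.show()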

Build the Head of the Toy Dinosaur


Now it’s time to build the head of the toy dinosaur. Doing this at an early stage is important because it allows you to collect training data in a controlled environment. By capturing images inside the toy’s mouth - with an LED installed - you ensure consistent lighting and background. This helps your AI model learn more accurately, and reduces the need for a large dataset or extensive image augmentation.

🛠️ Assembly Instructions:

  1. Carve a plastic bottle lengthwise to form the two parts of the mouth: the bottom and top jaws.
  2. Attach a plastic cup to the bottle to serve as the neck (see the photo). This will later connect the head to the body.
  3. Use additional bottle parts to shape the top and back of the head, giving it structure and volume.
  4. Use tape to hold the parts temporarily in place and test the alignment.
  5. Once you're satisfied with the shape, secure all parts with a hot glue gun.

🎨 Papier-Mâché Covering:

  1. Start covering the plastic frame with papier-mâché using medium-sized strips of packaging paper.
  2. Mix 50% PVA glue and 50% water, and apply it to both sides of each strip using a brush.
  3. Apply the strips over the plastic and smooth them. You'll need about 5 layers if you want to remove the plastic base later.
  4. Let it dry completely.

👉Tip: Keep the plastic base for the bottom jaw - it’s smoother than paper and makes it easier for pieces to slide into the toy’s body.

⚙️ Add the Mouth Mechanism:

  1. Position the bottom jaw inside the upper jaw and temporarily secure it with paper tape.
  2. Use your camera to test the view:
    - Place a colorful playing piece inside the mouth.
    - The camera should clearly see the entire surface of the bottom jaw. You can adjust the position of both the camera and the jaw to achieve the perfect view.
    - Avoid angles where the camera sees past the jaw or inside the toy’s body.
  3. Once the view is correct, mark the hinge point positions on both sides.
  4. Remove the bottom jaw and:
    - Create an opening in the plastic to insert the servo motor (I used a heated metal skewer to melt a hole).
    - Glue the servo securely into this opening.
  5. Attach the servo arm, then:
    - Reinsert the bottom jaw.
    - Mark where the servo arm will meet the inner wall of the upper jaw.
    - Remove the arm again and glue it to the marked position inside the upper jaw.
  6. On the opposite side, drill a hole through both jaws to act as a hinge.
  7. Insert a bolt to create the jaw’s pivot point so the bottom jaw can move freely.

Test the Jaw Mechanism

Once everything is assembled, it’s time to test the servo motor and make sure the mouth moves as intended.

  1. Power up the servo and observe its range of motion.
  2. Ensure the mouth opens wide enough for a child to easily insert pieces.
  3. When the servo closes the mouth, it should do so at an angle that gently pushes the piece downward, allowing it to slide into the toy’s belly.
  4. If needed, adjust the servo arm position or hinge alignment to improve the motion.

Getting this right now will ensure smooth gameplay later.

You can use this Python code:

from rpi5_ws2812.ws2812 import Color, WS2812SpiDriver
import time
from RPi import GPIO

servo_pin = 18

GPIO.setmode(GPIO.BCM)
GPIO.setup(servo_pin, GPIO.OUT)

servo_pwm = GPIO.PWM(servo_pin, 50)
servo_pwm.start(0)

if __name__ == "__main__":
    try:
        # Initialize the WS2812 strip with 8 LEDs on SPI bus 1, CE0
        strip = WS2812SpiDriver(spi_bus=1, spi_device=0, led_count=8).get_strip()

        while True:
            strip.set_all_pixels(Color(255, 0, 0))
            strip.show()
            servo_pwm.ChangeDutyCycle(8)  # adjust as needed for the servo motion
            time.sleep(2)
            strip.set_all_pixels(Color(0, 255, 0))
            strip.show()
            servo_pwm.ChangeDutyCycle(6)  # adjust as needed for the servo motion
            time.sleep(2)

    except KeyboardInterrupt:
        print("Program interrupted by user.")

    finally:
        # Stop PWM and clean up GPIO
        servo_pwm.stop()
        GPIO.cleanup()
        print("GPIO and PWM cleaned up.")

Install the LED and Camera Inside the Mouth


Now that the jaw mechanism works, it’s time to install the LED and webcam inside the toy’s mouth. These components are essential for both gameplay and collecting high-quality training images for your AI model. I removed the bottom jaw temporarily for easier access.

🧰 What I Used:

  1. An 8 LED module for consistent lighting
  2. A USB webcam or camera module for image capture
  3. Polystyrene pieces to create mounting cushions
  4. Elastic bands to hold the components in place
  5. Hot glue
  6. Paper tape

🛠️ Assembly Instructions:

  1. Create cushioned mounts: Cut small pieces of polystyrene to serve as cushioned bases for the camera and LED.
  2. Attach cushions to the upper jaw: Wrap elastic bands around the polystyrene, then hot-glue the cushions to the ceiling of the upper jaw.
  3. Position the camera: Carefully insert the camera module under the elastics. Take your time here - positioning is critical. You want a clear view of the pieces being inserted, and the camera must not be obstructed by the moving jaw or servo. This may require some trial and error and precise measuring (a live-preview sketch after this list can help). 👉 Tip: you might want to use a camera with a narrower field of view (around 50 degrees).
  4. Insert the LED: Place the LED module next to the camera, also held by the elastic band. It should illuminate the area evenly without creating shadows.
  5. Secure the wires: Use paper tape to neatly attach all wiring along the back inner wall of the head. This keeps wires safely out of the way of moving parts and the sliding pieces.
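
While you are adjusting the camera, a quick live preview makes the trial and error much faster. This is just a sketch using OpenCV, the same library as the capture script in the next step; camera index 1 assumes an external USB webcam (try 0 if you only have a built-in one):

# Live preview to help position the camera - press ESC to quit
import cv2

cap = cv2.VideoCapture(1)   # index 1 = external USB webcam; try 0 for a built-in camera

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("Mouth camera preview", frame)
    if cv2.waitKey(1) % 256 == 27:   # ESC key
        break

cap.release()
cv2.destroyAllWindows()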

Collect the Dataset for AI Training


Now that the camera and lighting are in place, it’s time to build your dataset by capturing images of the toy pieces inside the mouth. This data will be used to train the AI model to recognize shapes and colors accurately. 🚀If your colorful pieces look similar to mine, you can also download a ready-made model I’ve trained (see the last step for the link).

🧩 What to Do:

  1. Insert one piece at a time into the toy’s mouth.
  2. Capture images using Python (see sample code below).
  3. Vary the angle and position of each piece to add diversity.
  4. Take around 20 photos per piece.

For example, I used:

  1. 5 shapes × 5 colors = 25 unique pieces
  2. 20 images per piece = 500 total

👉 Make sure to collect an equal number of images for each piece to avoid class imbalance, which can reduce your model’s accuracy.
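
One easy way to keep an eye on class balance is to count your photos per piece as you go. The sketch below assumes a hypothetical folder layout with one subfolder per piece (for example captured_images/red_circle/); the capture script further down writes into a single folder, so you would re-point or rename the output folder for each piece:

# Count the captured photos per piece to spot class imbalance early
import os

dataset_root = "captured_images"   # hypothetical layout: one subfolder per piece

for piece in sorted(os.listdir(dataset_root)):
    piece_dir = os.path.join(dataset_root, piece)
    if os.path.isdir(piece_dir):
        count = len([f for f in os.listdir(piece_dir) if f.lower().endswith(".jpg")])
        print(f"{piece}: {count} images")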

➕ Additional Images to Include:

  1. Multiple pieces at once – This helps the model learn to reject invalid inputs (e.g., when a child inserts two or more pieces at the same time).
  2. Empty background – Use these as negative examples to teach the model what “no piece” looks like.
  3. Other objects – Include items like pencils, fingers, or wrappers to help the model ignore irrelevant items.

In total, I collected just over 800 images.

This is the code you can run to take photos:

import cv2
import os


# Create output folder if it doesn't exist
output_folder = "captured_images"
os.makedirs(output_folder, exist_ok=True)

# Open the webcam (index 1 = external USB webcam; use 0 for a built-in camera)
cap = cv2.VideoCapture(1)

if not cap.isOpened():
    print("Error: Cannot open webcam")
    exit()

img_counter = 0

print("Press SPACE to capture, ESC to quit.")

while True:
    ret, frame = cap.read()
    if not ret:
        print("Failed to grab frame")
        break

    cv2.imshow("Webcam", frame)

    key = cv2.waitKey(1)
    if key % 256 == 27:  # ESC key
        print("Exiting...")
        break
    elif key % 256 == 32:  # Spacebar
        img_name = f"{output_folder}/image_{img_counter:02d}.jpg"
        cv2.imwrite(img_name, frame)
        print(f"Saved: {img_name}")
        img_counter += 1

cap.release()
cv2.destroyAllWindows()

Annotate Your Dataset for Shapes


Once you've collected your images, the next step is to upload and label them using Roboflow, a platform that makes it easy to annotate data and train AI models.

📝 Get Started:

  1. Create a Roboflow Account
    - Go to roboflow.com, sign up (if you haven’t already), and log in.
  2. Create a New Project
    - Click “Create New Project”.
    - Choose Instance Segmentation as the annotation type (this lets you outline shapes precisely).
    - Name your project (e.g., "DinoToy_Shapes").
  3. Upload Your Images
    - Upload all the photos you captured during the previous step (or bulk-upload them with the optional Python sketch after this list).
  4. Define Your Classes
    - For the shape recognition model, I defined these classes: square, circle, triangle, pentagon, star, and invalid (for multiple pieces inserted at once).
  5. Start Annotating
    - Draw a segmentation mask around each visible shape.
    - Assign it to the appropriate class.
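
Uploading through the web interface is the simplest route, but if you have a lot of photos you can also push them with the roboflow Python package. This is only an optional sketch - the API key, workspace, and project IDs are placeholders you would take from your own Roboflow account:

# Optional: bulk-upload the captured images with the roboflow package (pip install roboflow)
import os

from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")                                # placeholder key
project = rf.workspace("your-workspace").project("dinotoy_shapes")   # placeholder IDs

image_folder = "captured_images"
for name in sorted(os.listdir(image_folder)):
    if name.lower().endswith(".jpg"):
        project.upload(os.path.join(image_folder, name))   # queues the image for annotation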

🛑 Special Cases:

  1. For images with multiple pieces inserted together, outline all pieces as a single object, and label it as invalid.
  2. For images of background only or unrelated objects (like fingers, pencils, wrappers), use the "Mark Null" button. These will serve as negative examples during training.

This structured annotation process is crucial to help your AI model distinguish valid input from invalid or noisy data.

Create and Annotate the Color Dataset


After you’ve finished annotating shapes, you can use the same image dataset to train a second model for color recognition.

🔁 Duplicate Your Project:

  1. In Roboflow, go to your shape project.
  2. Click the three-dot menu next to the project name and choose “Duplicate Project”.
  3. Name the new project something like daino-colors.

🏷️ Update the Labels for Colors:

  1. Open each image in the new color project.
  2. Change the class names from shape labels to color labels, for example:
    - square → yellow
    - circle → red
    - triangle → green
    - and so on, depending on the actual color of the piece in that image.
  3. Leave images with multiple pieces labeled as invalid.

This method lets you reuse the same dataset while training two separate models - one to recognize shape, the other color - using consistent visual data.

Prepare the Datasets for Training


🧠 Prepare the Dataset for Training

Before exporting your dataset, you'll need to generate training images with augmentations to improve your model’s performance and reduce overfitting.

⚙️ On Roboflow:

  1. Go to the “Generate” tab in your new project.
  2. Choose the desired image size (I used 640×640) for YOLO training.
  3. Add augmentations such as:
    - Rotation (e.g., ±15°)
    - Blur (slight)
    - Brightness/contrast adjustments (you can see the settings I used in the photo)
  4. Choose how many augmented copies you want per image (up to 3× on a free account).
  5. Click “Generate Dataset”.

Train the AI Model


Once your dataset is ready, it's time to train your model. You can do this either directly on Roboflow or locally on your computer (the local route works best if you have a dedicated GPU and gives you more control over training).

☁️ Option 1: Train on Roboflow (Quick & Cloud-Based)

Roboflow allows you to train directly in the browser. The benefit is that you can turn off your computer while the training runs in the cloud.

💻 Option 2: Train Locally (Advanced + Flexible)

  1. Download Your Dataset
    - In your Roboflow project, choose “Export Dataset”.
    - Select the YOLOv11 format.
    - Download and unzip it on your computer.
  2. Prepare Your Environment
    - Create a folder for your project (e.g., ~/dino-yolo).
    - Inside, create a Python file like train_shapes.py.
  3. Install Required Packages
    - Make sure Python is installed, then install:
pip install ultralytics
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121 # Adjust for your CUDA version
  4. Write Your Training Script

Below is an example (you’ll customize paths as needed):

from ultralytics import YOLO

# YOLO11 nano segmentation weights - matches the instance-segmentation dataset from Roboflow
model = YOLO("yolo11n-seg.pt")

model.train(
    data="path/to/dataset/data.yaml",  # Path to the Roboflow-exported YAML
    epochs=100,
    patience=20,
    imgsz=640,
    batch=8,    # Set your desired batch size here
    device=0,   # GPU index; use device="cpu" to train without a GPU
)

Tips

  1. The dataset is small, so increase the number of epochs (e.g., 100+) to allow the model time to converge.
  2. You can try larger models like yolo11m-seg.pt for higher accuracy (but they require more memory and training time).
  3. Make sure the path to your data.yaml file is correct.

Monitor Training

  1. You’ll see progress in the terminal: loss values, precision, recall, etc.
  2. You can build or paint the rest of your toy while it trains!

Evaluate Performance

  1. After training, check the confusion matrix and mAP values.
  2. If the model struggles with certain classes, consider collecting more images, improving the annotations, using a bigger model, adjusting training parameters, or adding different augmentations.

Retrieve Your Trained Model

  1. Your trained model weights are saved as best.pt (for local Ultralytics training, look inside the runs/ folder it creates); a quick sanity-check sketch follows after this list.
  2. Repeat the training process for your second model (colors or shapes, depending on what you did first).
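
As a quick sanity check before wiring the model into the game, you can load best.pt back into Ultralytics, re-run validation, and try a prediction on one of your captured photos. A minimal sketch - the paths are placeholders, so point them at your own training output and test image:

from ultralytics import YOLO

# Placeholder paths - point these at your own training output and a test photo
model = YOLO("runs/segment/train/weights/best.pt")

# Re-run validation on the dataset's validation split and print the mean average precision
metrics = model.val(data="path/to/dataset/data.yaml")
print("mAP50-95:", metrics.box.map)

# Try a single prediction and list the detected classes with their confidence
results = model.predict("captured_images/image_00.jpg", verbose=False)
for box in results[0].boxes:
    print(results[0].names[int(box.cls)], float(box.conf))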

Build and Finish the Dinosaur’s Body


Now that the head is working and the electronics are tested, it's time to create the body of the dinosaur!

🛠️ Forming the Body Shape

  1. I used a gift package shaped like an Easter egg, but you can also use a balloon as a mold.
  2. Cover it with about 5 layers of papier-mâché. Let each layer dry before adding the next.
  3. Once fully dry, cut two circular openings:
    - One at the top to attach the head
    - One at the bottom to retrieve the inserted pieces
  4. Cut a small half-circle at the bottom to route cables safely into the body.

🦖 Adding Details

  1. Spikes: Cut strips from thick paper, roll them up into cones and glue them to the top of the head and along the back.
  2. Make teeth from the same paper and glue them to the upper jaw.
  3. Legs & Tail Base: Cut rough shapes for legs and a tail base out of thick cardboard, then glue them to the bottom of the body.
  4. You can use foil or polystyrene to add volume where needed (e.g., for shaping the head, legs, or tail).
  5. Secure the material with hot glue, then cover with paper strips.

✨ Smoothing and Finishing

  1. Apply a layer of super-fine papier-mâché to smooth the surface once the base structure is dry and firm.
  2. Use this layer to shape small details like nostrils, toes, and arms.
  3. Attach googly eyes and sculpt eyelids around them with the same fine material.

Painting and Sealing



  1. Once everything is fully dry, apply a base coat or primer for acrylic paint. This helps prevent the paint from soaking in unevenly.
  2. After the base coat dries, you can use 120-grit sandpaper to smooth out any bumps.
  3. Paint your dinosaur with your chosen acrylic colors. Let the first layer dry, then apply a second coat (I applied 3 layers).
  4. Once the paint is completely dry and you've achieved the desired coverage, you can apply a varnish to protect the surface and give your toy a polished finish. This is optional.

Create and Add Voice Lines


Now that your dinosaur is fully built and painted, let’s give it a voice!

✍️ Writing Your Lines

Start by making a list of all the phrases your dinosaur will say. These could include:

  1. “Great job!”
  2. “Try again!”
  3. “Put the red shape here.”
  4. “Let’s play a game!”

Write everything down so you’re ready to generate the files.

🗣️ Using ElevenLabs

Go to https://elevenlabs.io and try out their text-to-speech tool.

  1. You can create some voice files for free, but note there’s a limit - after that, you’ll need to purchase a one-month subscription.
  2. Choose a voice you like. Try a few different ones to see what fits your toy best!
  3. Paste your phrases one by one and generate the audio.
  4. Download the audio files and save them in your project folder (e.g., in a subfolder called audio).

🐍 Playing Audio in Python

To play the voice lines, use the pygame module in Python.

import pygame
pygame.mixer.init()
pygame.mixer.music.load("audio/hello.mp3")
pygame.mixer.music.play()
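
One caveat: pygame.mixer.music.play() returns immediately, so a short script can finish before you hear anything. A small helper like this hypothetical say() function waits for the clip to end and is easy to reuse later in the game code:

import pygame

pygame.mixer.init()

def say(path):
    """Play one voice clip and block until it has finished."""
    pygame.mixer.music.load(path)
    pygame.mixer.music.play()
    while pygame.mixer.music.get_busy():   # wait for playback to end
        pygame.time.wait(100)

say("audio/hello.mp3")   # example usage with the same file as above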

🔊 Place the speaker in the belly of the dinosaur.

Writing the Game Code

Now it’s time to add the logic that brings everything together.

  1. You can write your own game rules based on how you want the dinosaur to respond (a simplified sketch of one possible game loop follows below).
  2. Or, check out my code on GitHub for inspiration or reuse:
  3. 🔗 github.com/howest-mct/2024-2025-projectone-ctai-RiddickNataliia
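
To give a feel for how the pieces fit together, here is a heavily simplified, hypothetical sketch of one possible game loop running on a single machine. In my actual project the detection runs on the laptop while the Raspberry Pi drives the servo and LED over a network connection (see the GitHub repo above), and all the file paths and audio names below are placeholders:

# Hypothetical, single-machine sketch of one possible game loop - placeholders throughout.
# In the real project, detection runs on the laptop and the Raspberry Pi moves the servo
# and LED over a network connection; see the GitHub repo above.
import random
import time

import cv2
import pygame
from ultralytics import YOLO

shape_model = YOLO("models/shapes_best.pt")   # placeholder paths to your trained weights
color_model = YOLO("models/colors_best.pt")

# Each round: which model to ask, which class counts as correct, which prompt to play
ROUNDS = [
    (color_model, "red",    "audio/find_red.mp3"),
    (shape_model, "circle", "audio/find_circle.mp3"),
]

def play(path):
    """Play one voice clip and wait until it has finished."""
    pygame.mixer.music.load(path)
    pygame.mixer.music.play()
    while pygame.mixer.music.get_busy():
        pygame.time.wait(100)

def top_class(model, frame):
    """Return the most confident detected class name in the frame, or None."""
    result = model.predict(frame, verbose=False)[0]
    if len(result.boxes) == 0:
        return None
    best = int(result.boxes.conf.argmax())
    return result.names[int(result.boxes.cls[best])]

pygame.mixer.init()
cap = cv2.VideoCapture(1)                     # USB webcam inside the mouth

while True:
    model, expected, prompt = random.choice(ROUNDS)
    play(prompt)
    time.sleep(3)                             # give the child time to insert a piece
    ok, frame = cap.read()
    if not ok:
        continue
    if top_class(model, frame) == expected:
        play("audio/great_job.mp3")           # and close the jaw so the piece is "swallowed"
    else:
        play("audio/try_again.mp3")           # and reject the piece instead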

🧪 Final Testing

Try out the toy yourself to make sure everything works smoothly - voice, lights, movement, and recognition. Once you're happy, invite children to test it out.

🎉 Great Job!

You’ve built an interactive AI dinosaur toy. Time to celebrate!

If you make this project, I would love to see how it turned out, so please reach out 🧡