EcoSort

by parkbpx in Circuits > Raspberry Pi


In today’s world, proper waste management is more important than ever. What if there were a way to sort trash more easily, quickly, and efficiently? Meet EcoSort, a robot designed to help categorize and sort your waste automatically. Using a Memento camera and AI, EcoSort captures images of trash inside a box, identifies what type of waste it is—Landfill, Compost, Hazard, Plastic, Metal, Paper, Glass, or Organic—and categorizes it accordingly.

Once the trash is sorted, EcoSort sends the data through MQTT to a Pico W board, which triggers corresponding lights and audio, informing you of the trash type. This robot makes sorting waste not only smarter but also more interactive. Follow along to learn how to build your own!

Supplies

Lasercut and Assemble Boxes


To start this project, download the files attached below, which include everything you need for assembly.

  1. Prepare the Wooden Base: The wooden box serves as the base of the project. Start by assembling this wooden box—it will support the acrylic box that is attached later.
  2. Drill a Hole for Wires: On one side of the wooden box, drill a hole. This will allow you to pass the wires from the light strips through the box, ensuring a clean and organized setup.
  3. Assemble the Acrylic Box: The acrylic box will fit inside the wooden box. The gap between the wooden and acrylic boxes is important, as it will accommodate the light strips later in the project. Ensure there is enough space to fit the lights comfortably.
  4. Insert the Acrylic Box: Once the hole is drilled, place the acrylic box inside the wooden box. You can secure it in place with a hot glue gun.

With the base structure ready, you’re now prepared to proceed with the next steps of the project!

3D Printouts


Now it's time to do some 3D printing.

  1. Download the STL Files: First, download the STL files attached below for the parts you'll need to print. There are two parts: the yellow camera holder, which holds the Memento camera securely in place, and the black front piece.
  2. Print the Yellow Camera Holder: Using your 3D printer, print the yellow camera holder. This part is essential for holding up the camera. The holder is 4 mm taller than the camera so the camera can slide in easily with the velcro stickers.
  3. Assemble the Camera Holder: Once printed, assemble the camera holder by securing the pieces together with superglue or a hot glue gun. This creates a stable base for the camera. Then add two small velcro stickers on the inside of the holder.
  4. Print the Black Front Piece: The black printout goes on the front of the Memento camera. It is designed to fit the camera body and its mounting holes exactly.

Once your camera holder is assembled and the black printout is attached, you’re ready to proceed with integrating the camera into the robot’s system!

Downloads

Assemble Camera


Now that you’ve printed and assembled the camera holder, it’s time to put together the Memento camera itself. Follow the steps below to ensure everything is connected and ready for action:

  1. Insert the SD Card: Begin by inserting the SD card into the camera.
  2. Bridge the Contacts Next to the SD Card: For the SD card to work, you need to connect the two slots right next to it using a plug-to-plug cable. If you don’t have one, you can use two plug-to-alligator cables: clip them on and wrap them with copper tape for added security and a more stable connection. This is the most important step—without this connection, the camera has nowhere to store images and no way to retrieve and process them.
  3. Add the Battery: Insert the battery into the designated slot to power the camera.
  4. Connect to Your Laptop: Use a USB-C cable to connect the camera to your laptop. This will let you start coding and communicating with the camera during setup.
  5. Attach the Black 3D Prints: Use four M3x20 nuts and bolts to attach the black 3D prints to the camera. These parts ensure the camera is secured properly within the holder.
  6. Attach Velcro to Secure the Camera: Attach two small velcro strips to the middle bottom of both the front and back of the camera. These keep the camera in place in the holder during operation.

Additional step: press the camera into the camera holder (the velcro should hold it in place)!

With the camera assembled, you're ready to continue integrating it into your robot’s system!

Code the Camera

You can copy the code below into your code.py. Here are the important pieces to understand!

  1. Imports and Setups
import os, time, ssl, binascii, wifi, vectorio, socketpool, adafruit_requests, displayio
from jpegio import JpegDecoder
from adafruit_display_text import label, wrap_text_to_lines
import terminalio
import adafruit_pycamera
import adafruit_minimqtt.adafruit_minimqtt as MQTT
  - Imports libraries for file I/O, networking, image handling, display output, MQTT communication, and camera control.

  2. Configuration and Initialization
text_scale = 2
trash = ("Can you identify what the object is and then classify what type of trash it is "
         "(Organic, Glass, Paper, Metal, Plastic, Hazard, Compost, Landfill)? "
         "Follow this template: The object appears to be ____. This type of trash for this item is ___.")
prompts = [trash]
num_prompts = len(prompts)
prompt_index = 0
prompt_labels = ["CLASSIFY TRASH"]
  - text_scale: scale for the text displayed on the screen.
  - trash: a template string asking OpenAI’s model to classify trash types.
  - prompts: a list of classification prompts for OpenAI. In this case, there's only one prompt.
  - num_prompts, prompt_index, prompt_labels: variables that manage the prompt system for cycling through different classification tasks, as sketched below.
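For example, if you later add a second classification task, the same machinery works unchanged; the prompt below is purely hypothetical:

recycle = ("Is this item recyclable in a standard curbside program? "
           "Answer yes or no, then explain briefly.")  # hypothetical second prompt
prompts = [trash, recycle]
prompt_labels = ["CLASSIFY TRASH", "RECYCLABLE?"]
num_prompts = len(prompts)  # recomputed so prompt cycling covers both prompts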

  3. Image Encoding Function
def encode_image(image_path):
    # Read the image and convert it to base64 for the OpenAI API
    with open(image_path, 'rb') as image_file:
        image_data = image_file.read()
    base64_encoded_data = binascii.b2a_base64(image_data).decode('utf-8').rstrip()
    return base64_encoded_data
  - Reads an image from a file path and converts it into base64 encoding, which is required for sending images through OpenAI's API.
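A quick usage note: given a photo on the SD card (the path below is a hypothetical example), the function returns a plain base64 string:

b64 = encode_image("/sd/img_0001.jpg")  # hypothetical example path
print(b64[:40])  # first characters of the encoded JPEG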

  4. Text Display on Screen
def view_text(the_text):
    rectangle = vectorio.Rectangle(pixel_shader=palette, width=240, height=240, x=0, y=0)
    pycam.splash.append(rectangle)
    the_text = "\n".join(wrap_text_to_lines(the_text, 20))
    if prompt_index == 1:
        the_text = the_text.replace("*", "\n")
    text_area = label.Label(terminalio.FONT, text=the_text,
                            color=0xFFFFFF, x=2, y=10, scale=text_scale)
    pycam.splash.append(text_area)
    pycam.display.refresh()
  - Displays text on the screen by creating a rectangular background and wrapping the text to fit within the display area. It also adjusts the formatting for different prompt types.

  5. Image Classification via OpenAI API
def send_img(img, prompt):
    base64_image = encode_image(img)
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer API-KEY"  # replace API-KEY with your OpenAI API key
    }
    payload = {
        "model": "gpt-4-turbo",
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": f"{prompts[prompt_index]}"
                    },
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/jpeg;base64,{base64_image}"
                        }
                    }
                ]
            }
        ],
        "max_tokens": 300
    }
    # 'requests' is the adafruit_requests.Session created after Wi-Fi connects (see below)
    response = requests.post("https://api.openai.com/v1/chat/completions",
                             headers=headers, json=payload)
    json_openai = response.json()
    print(json_openai)
    TEXT = json_openai['choices'][0]['message']['content']

    # part 6 [explained later]
  - Sends the image (encoded in base64) to OpenAI's API with the appropriate prompt, then processes the returned classification result.
  - Replace API-KEY with your actual OpenAI API key. If you don't have one, create an OpenAI account and generate a new secret key.
  - The returned classification text is used to determine the trash type and send the corresponding messages:
    # (still inside send_img) save the result to a matching .txt file
    alt_text_file = img.replace('jpg', 'txt')
    alt_text_file = alt_text_file[:11] + f"_{prompt_labels[prompt_index]}" + alt_text_file[11:]
    if prompt_index == 5:
        alt_text_file = alt_text_file.replace("?", "")
    with open(alt_text_file, "a") as fp:
        fp.write(json_openai['choices'][0]['message']['content'])
        fp.flush()
    time.sleep(1)
  - Saves the text result from OpenAI into a text file that reuses the image's filename with a .txt extension. The results are appended to the file.

  6. Sending Classification Results via MQTT
if "Organic" in TEXT:
mqtt_client.publish(light_sound_feed, "organic")
if "Metal" in TEXT:
mqtt_client.publish(light_sound_feed, "metal")
if "Glass" in TEXT:
mqtt_client.publish(light_sound_feed, "glass")
if "Paper" in TEXT:
mqtt_client.publish(light_sound_feed, "paper")
if "Plastic" in TEXT:
mqtt_client.publish(light_sound_feed, "plastic")
if "Hazard" in TEXT:
mqtt_client.publish(light_sound_feed, "hazard")
if "Landfill" in TEXT:
mqtt_client.publish(light_sound_feed, "landfill")
if "Compost" in TEXT:
mqtt_client.publish(light_sound_feed, "compost")
  - Based on the classification result (TEXT), this block sends the trash type to the MQTT broker. Each category (e.g., "Organic", "Plastic") has its own message published to the light_sound_feed feed; a more compact alternative is sketched below.
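If you prefer, the same eight checks can be written as a loop over the category names. This compact version is a sketch (not from the original code) and behaves the same way, including publishing more than once if several category words appear:

CATEGORIES = ("Organic", "Metal", "Glass", "Paper",
              "Plastic", "Hazard", "Landfill", "Compost")
for category in CATEGORIES:
    if category in TEXT:
        mqtt_client.publish(light_sound_feed, category.lower())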

  7. Loading and Displaying Images
def load_image(bit, file):
    bit.fill(0b00000_000000_00000)  # fill with a middle grey
    decoder.open(file)
    decoder.decode(bit, scale=0, x=0, y=0)
    pycam.blit(bit, y_offset=32)
    pycam.display.refresh()

# part 8 [explained later]

palette = displayio.Palette(1)
palette[0] = 0x000000
decoder = JpegDecoder()
bitmap = displayio.Bitmap(240, 176, 65535)

  8. WiFi & MQTT Connection
print("Connecting to WiFi")
try:
wifi.radio.connect(WIFI_SSID, WIFI_PASSWORD)
print(f"Connected to Wi-Fi! IP address: {wifi.radio.ipv4_address}")
except ConnectionError as e:
print(f"Connection failed: {e}")
  - Enter your own Wi-Fi SSID and password in wifi.radio.connect() so the camera can reach OpenAI and process images.
mqtt_client = MQTT.MQTT(
    broker="io.adafruit.com",
    port=1883,
    username="YOUR_AIO_USERNAME",  # your Adafruit IO username
    password="YOUR_AIO_KEY",       # your Adafruit IO key -- keep it secret!
    socket_pool=pool,
    ssl_context=ssl.create_default_context(),
)

def connected(client, userdata, flags, rc):
    # Connected to the broker at Adafruit IO
    print("Connected to Adafruit IO! Listening for topic changes in feeds I've subscribed to")
    # Subscribe to all changes on the feed.
    client.subscribe(light_sound_feed)

def disconnected(client, userdata, rc):
    # Disconnected from the broker at Adafruit IO
    print("Disconnected from Adafruit IO!")

def message(client, topic, message):
    print(f"Received message on topic {topic}: {message}")

light_sound_feed = "parkbpx/feeds/light_sound_feed"  # use your own username here

mqtt_client.on_connect = connected
mqtt_client.on_disconnect = disconnected
mqtt_client.on_message = message

mqtt_client.connect()
  - Configures the MQTT client to connect to Adafruit IO and subscribes to the feed used to interact with the rest of the system.
  - light_sound_feed: the feed we're controlling and sending the data over.

  9. Main Loop (see the full code for the details)
while True:
    # Capture images, navigate prompts, handle button presses,
    # and display messages (see the full code)
    mqtt_client.loop(2)
  - The main loop handles capturing images, displaying messages, handling button presses (for cycling through prompts), and sending requests to OpenAI. It also manages interactions with the MQTT feeds; a sketch of the capture logic is below.
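For reference, here is a minimal sketch of what that capture logic can look like, modeled on the standard adafruit_pycamera examples. The button handling and capture_jpeg() usage follow those examples, and the exact logic in the full code may differ:

while True:
    pycam.keys_debounce()                 # refresh button states
    if pycam.shutter.short_count:         # shutter tapped: take a photo
        pycam.display_message("SNAP", color=0x00DD00)
        the_image = pycam.capture_jpeg()  # saves a JPEG to the SD card
        pycam.live_preview_mode()
        if the_image is not None:
            view_text("Sending to OpenAI...")
            send_img(the_image, prompts[prompt_index])
    mqtt_client.loop(2)                   # keep the MQTT connection alive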

The code provides the full flow: capturing an image, sending it to OpenAI for classification, displaying the result on screen, and publishing messages via MQTT based on that result.

Downloads

Adafruit IO Dashboard


To publish data and retrieve it, you’ll create an Adafruit IO dashboard with momentary buttons. Follow these steps to set it up:

1. Sign in to Adafruit IO

  1. Go to adafruit.com and log in to your account.
  2. If you don’t have an account, create one and sign in.

2. Access Your Dashboard

  1. Navigate to the IO section in the site header.
  2. Click Dashboards and then open Dashboard Settings (on the right side).
  3. Select Create New Block to start adding controls.

3. Create the Trash-Type Buttons

  1. Select Block Type: Choose Momentary Button as the block type.
  2. Enter Feed Name: Create a new feed called light_sound_feed (the name must match the feed used in the code).
  3. Set Button Titles and Values:
     - Add buttons with the titles: Organic, Glass, Paper, Landfill, Metal, Plastic, Hazard, Compost.
     - Change the colors accordingly. The colors are based on the universal trash-bin system; however, you may want to change them up a little.
     - For each button, set the Button Text and Press Value to match the title (e.g., "organic," "glass"). Ensure the Press Value is written in lowercase to align with the code.

This is the IO dashboard for reference: https://io.adafruit.com/parkbpx/dashboards/camera-control.
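Once the buttons exist, you can also sanity-check the feed from your laptop. This optional sketch assumes you have the paho-mqtt package installed (pip install paho-mqtt) and substitute your own Adafruit IO credentials; publishing this way should have the same effect as pressing a dashboard button:

import paho.mqtt.publish as publish

AIO_USERNAME = "YOUR_AIO_USERNAME"  # placeholder: your Adafruit IO username
AIO_KEY = "YOUR_AIO_KEY"            # placeholder: your Adafruit IO key

# Publish one test value; the Pico W receiver (next step) should
# light up green and play organic.mp3 when it receives this.
publish.single(
    topic=f"{AIO_USERNAME}/feeds/light_sound_feed",
    payload="organic",
    hostname="io.adafruit.com",
    port=1883,
    auth={"username": AIO_USERNAME, "password": AIO_KEY},
)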

Code the Actions (receiver)

Now we need to load the code onto the receiver, the Pico W, which is responsible for lighting up the box and playing sounds.

  1. Configure the settings.toml File: Edit the file to include your Wi-Fi credentials and Adafruit IO settings. Copy and paste the following template, replacing the placeholders with your actual details:
WIFI_SSID = "PLACE THE NAME OF YOUR WIFI"
WIFI_PASSWORD = "PLACE THE PASSWORD OF YOUR WIFI"
AIO_USERNAME = "PLACE YOUR AIO USERNAME"
AIO_KEY = "PLACE YOUR AIO KEY"
BROKER = "io.adafruit.com"
PORT = 1883
  - Replace PLACE THE NAME OF YOUR WIFI with your Wi-Fi network name (SSID).
  - Replace PLACE THE PASSWORD OF YOUR WIFI with your Wi-Fi password.
  - Replace PLACE YOUR AIO USERNAME and PLACE YOUR AIO KEY with the credentials found under the yellow key icon on the dashboard.

  2. Imports and Setup
import board, time, pwmio, neopixel
import os, ssl, socketpool, wifi
import adafruit_minimqtt.adafruit_minimqtt as MQTT
from audiopwmio import PWMAudioOut as AudioOut
from audiomp3 import MP3Decoder
from adafruit_led_animation.color import RED, YELLOW, ORANGE, GREEN, TEAL, CYAN, BLUE, PURPLE, MAGENTA, \
GOLD, PINK, AQUA, JADE, AMBER, OLD_LACE, WHITE, BLACK

  - Imports the libraries needed for hardware control (LEDs, audio), network communication (Wi-Fi, MQTT), and file handling.

  3. Hardware Initialization
strip = neopixel.NeoPixel(board.GP15, 30, brightness=1.0)

audio = AudioOut(board.GP16)

path = "sounds/"

filename = "organic.mp3" # change to valid file in your path
mp3_file = open(path + filename, "rb")
decoder = MP3Decoder(mp3_file)
  - NeoPixel strip: controlled via GPIO pin GP15, with 30 LEDs; brightness is initially set to maximum (1.0).
  - Audio output: set up on GPIO pin GP16.
  - path: the directory for the MP3 files. Download sounds.zip and drag its contents onto your CIRCUITPY drive so the sounds can play.

  4. Play Sound and Lights
def activate(filename, color, brightness):
    strip.fill(color)
    strip.brightness = brightness
    decoder.file = open(path + filename, "rb")
    audio.play(decoder)
    while audio.playing:
        pass
    strip.fill(0)
  - Sets the LED strip to the specified color and brightness.
  - Opens the MP3 file and plays it through the audio object.
  - Waits until the audio finishes playing, then turns the LED strip off.

  5. MQTT Callback Functions
# Get the Adafruit IO username and key from settings.toml
aio_username = os.getenv('AIO_USERNAME')
aio_key = os.getenv('AIO_KEY')

# Set up the feed; its name may differ from your dashboard's name
light_sound_feed = aio_username + "/feeds/light_sound_feed"

# Set up functions to respond to MQTT events
def connected(client, userdata, flags, rc):
    # Connected to the broker at Adafruit IO
    print("Connected to Adafruit IO! Listening for topic changes in feeds I've subscribed to")
    # Subscribe to all changes on the feed.
    client.subscribe(light_sound_feed)

def disconnected(client, userdata, rc):
    # Disconnected from the broker at Adafruit IO
    print("Disconnected from Adafruit IO!")
  - Fetches the Adafruit IO username and key from settings.toml using os.getenv.
  - Constructs the MQTT feed name light_sound_feed.
  - connected: subscribes to light_sound_feed upon connecting to Adafruit IO.
  - disconnected: logs a message when disconnected.

  6. Messages
def message(client, topic, message):
    # The bulk of the code that responds to MQTT lives here, NOT in while True:
    print(f"topic: {topic}, message: {message}")
    if topic == light_sound_feed:  # trash-type messages
        if message == "organic":
            activate("organic.mp3", GREEN, 1.0)
        elif message == "glass":
            activate("glass.mp3", AQUA, 1.0)
        elif message == "paper":
            activate("paper.mp3", WHITE, 1.0)
        elif message == "metal":
            activate("metal.mp3", YELLOW, 1.0)
        elif message == "plastic":
            activate("plastic.mp3", BLUE, 1.0)
        elif message == "hazard":
            activate("hazard.mp3", RED, 1.0)
        elif message == "landfill":
            activate("landfill.mp3", (105, 105, 105), 0.1)
        elif message == "compost":
            activate("compost.mp3", (139, 69, 19), 1.0)
  - Executes actions based on the MQTT message received: each message corresponds to a trash type (e.g., "organic," "glass") and triggers the activate function with that type's sound, color, and brightness. (A more compact, table-driven version is sketched below.)
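As a design alternative (not part of the original code), the chain of elif branches can be collapsed into a lookup table with the same sounds, colors, and brightness values:

ACTIONS = {
    "organic":  ("organic.mp3",  GREEN,           1.0),
    "glass":    ("glass.mp3",    AQUA,            1.0),
    "paper":    ("paper.mp3",    WHITE,           1.0),
    "metal":    ("metal.mp3",    YELLOW,          1.0),
    "plastic":  ("plastic.mp3",  BLUE,            1.0),
    "hazard":   ("hazard.mp3",   RED,             1.0),
    "landfill": ("landfill.mp3", (105, 105, 105), 0.1),
    "compost":  ("compost.mp3",  (139, 69, 19),   1.0),
}

def message(client, topic, message):
    print(f"topic: {topic}, message: {message}")
    if topic == light_sound_feed and message in ACTIONS:
        activate(*ACTIONS[message])  # unpack (filename, color, brightness)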

  7. Connect to Wi-Fi & the MQTT Broker
# Connect to WiFi
print("Connecting to WiFi")
wifi.radio.connect(os.getenv("WIFI_SSID"), os.getenv("WIFI_PASSWORD"))
print("Connected!")
strip.fill(0)

# Create a socket pool
pool = socketpool.SocketPool(wifi.radio)

# Set up a MiniMQTT client - this program subscribes (or "listens") to the feed
mqtt_client = MQTT.MQTT(
    broker=os.getenv("BROKER"),
    port=os.getenv("PORT"),
    username=aio_username,
    password=aio_key,
    socket_pool=pool,
    ssl_context=ssl.create_default_context(),
)

# Attach the "callback" MQTT methods defined above
mqtt_client.on_connect = connected
mqtt_client.on_disconnect = disconnected
mqtt_client.on_message = message

# Connect to the MQTT broker (Adafruit IO for us)
print("Connecting to Adafruit IO...")
mqtt_client.connect()
  - Connects to the Wi-Fi network using credentials from settings.toml.
  - Initializes the MQTT client with the broker and port from settings.toml, Adafruit IO username/key authentication, and the socket pool for network communication.
  - Attaches the callbacks, then connects to the broker.

  8. Main Loop
# Ask Adafruit IO for the latest value on the feed
mqtt_client.publish(light_sound_feed + "/get", "")
print("Robot Working!")

while True:
    mqtt_client.loop(2)
  - Sends an initial MQTT request (the /get topic) to fetch the latest state of light_sound_feed.
  - Continuously listens for MQTT messages in the main loop.

Assemble Everything


Now, the last step: put everything together!

  1. Squeeze in the Light Strips
     - Carefully insert the light strips into the gap between the wooden and acrylic boxes.
     - Tip: it might be challenging—go slowly to avoid damaging the strips or bending them too sharply.
     - Make sure the light-strip wires come out through the hole drilled in the box.
  2. Wire It Up (a quick test sketch follows this list)
     - Light strips: use 3 male-to-male jumper wires.
       - Black wire: connect to the GND pin on the breadboard.
       - Red wire: connect to the 5V pin.
       - White wire: connect to GP15 on the Pico W.
     - Speaker: use 2 male-to-alligator wires.
       - Bottom speaker wire: attach to GND on the Pico W.
       - Top speaker wire: attach to GP16.
  3. Optional: Attach the Camera Holder
     - If you want to secure the camera holder, position it at the top of the box and fix it in place with a glue gun.
     - If you would rather just hold the camera holder in your hands, that's perfectly fine too!
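Before closing everything up, it can be worth verifying the wiring with a short test. This sketch assumes the pin assignments above (GP15 for the strip, GP16 for the speaker) and plays a generated tone instead of the MP3 files:

# wiring_test.py — quick check that the strip lights up and the speaker hums
import array
import math
import time

import board
import neopixel
from audiocore import RawSample
from audiopwmio import PWMAudioOut as AudioOut

strip = neopixel.NeoPixel(board.GP15, 30, brightness=0.5)
audio = AudioOut(board.GP16)

# One cycle of a ~440 Hz sine wave at an 8 kHz sample rate
length = 8000 // 440
sine = array.array("H", [
    int(math.sin(2 * math.pi * i / length) * 30000 + 32768)
    for i in range(length)
])

strip.fill((255, 0, 0))                                   # whole strip turns red
audio.play(RawSample(sine, sample_rate=8000), loop=True)  # steady tone
time.sleep(1)
audio.stop()
strip.fill(0)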

Done!
