Stream Raspberry Pi Camera and Microphone to React Web App

by Orange Digital Center in Circuits > Raspberry Pi


This project was developed at the Orange Digital Center Morocco, a space dedicated to fostering innovation, creativity, and rapid prototyping. At the FabLab, individuals and teams have access to state-of-the-art tools, including 3D printers, laser cutters, and a variety of electronic and mechanical resources. The center provides a collaborative environment where innovators, entrepreneurs, and students can transform their ideas into tangible products, with a focus on sustainable and impactful solutions.

Build a real-time video and audio streaming system using WebRTC for remote monitoring, baby monitoring, pet cams, or telepresence applications.

Supplies


Hardware Required

  1. Raspberry Pi 4 (recommended) or Pi 3B+
  2. MicroSD Card (32GB or larger, Class 10)
  3. USB Camera or Pi Camera Module
  4. USB Microphone or USB audio adapter with microphone
  5. Power Supply for Raspberry Pi
  6. Ethernet cable or WiFi connection
  7. Computer/Phone with modern web browser

Software Required

  1. Raspberry Pi OS (latest version)
  2. Python 3.7+ (included with Pi OS)
  3. Node.js and npm (for React app)
  4. Modern web browser (Chrome, Firefox, Safari, Edge)

Set Up Your Raspberry Pi


Install Raspberry Pi OS

  1. Download Raspberry Pi Imager from rpi.org
  2. Flash Raspberry Pi OS (64-bit) to your microSD card
  3. Enable SSH and WiFi in imager settings if needed
  4. Boot your Pi and complete initial setup

Enable Camera (if using Pi Camera)

sudo raspi-config
# Navigate to: Interface Options → Camera → Enable
sudo reboot

Update System

sudo apt update && sudo apt upgrade -y



Connect Hardware


Camera Connection - USB Camera

  1. Plug USB camera into any USB port
  2. Verify with: ls /dev/video* (should show /dev/video0)
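
Some cameras register more than one /dev/video node (the extras usually carry metadata rather than frames), so the stream is normally on the lowest index. A small stand-alone helper to pick it — illustrative, not part of the tutorial's scripts:

```python
import re
from glob import glob

def pick_camera_index(paths):
    """Return the lowest /dev/videoN index from a list of device paths, or None."""
    indices = sorted(
        int(m.group(1))
        for p in paths
        if (m := re.fullmatch(r"/dev/video(\d+)", p))
    )
    return indices[0] if indices else None

# On the Pi this inspects the real device nodes; the result is what you
# would pass to cv2.VideoCapture() as CAMERA_DEVICE
print(pick_camera_index(glob("/dev/video*")))
```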

Microphone Connection - USB Microphone

  1. Plug USB microphone into USB port
  2. Best option for audio quality

Test Hardware

# Test camera
fswebcam test_image.jpg

# List audio devices
arecord -l

# Test microphone (record 3 seconds)
arecord -d 3 -f cd test_audio.wav
aplay test_audio.wav
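
The card/device numbers printed by `arecord -l` are what ALSA device strings like `hw:1,0` refer to. A rough parser for that listing, if you want to pick devices programmatically (the sample text below illustrates the typical output format and is not taken from a real machine):

```python
import re

def parse_arecord_l(output):
    """Extract (card, device) number pairs from `arecord -l` output."""
    return [
        (int(m.group(1)), int(m.group(2)))
        for m in re.finditer(r"^card (\d+):.*?device (\d+):", output, re.MULTILINE)
    ]

sample = """\
**** List of CAPTURE Hardware Devices ****
card 1: Device [USB Audio Device], device 0: USB Audio [USB Audio]
"""
print(parse_arecord_l(sample))  # the pair (card, device) maps to hw:1,0
```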


Install Python Dependencies


Install System Packages

# Audio and development packages
sudo apt install -y python3-dev python3-pip libasound2-dev portaudio19-dev

# Audio utilities
sudo apt install -y alsa-utils pulseaudio pulseaudio-utils

# Optional: Fix audio configuration
sudo alsactl init

Create Virtual Environment

# Create project directory
mkdir ~/pi-streamer
cd ~/pi-streamer

# Create virtual environment
python3 -m venv myenv
source myenv/bin/activate

Install Python Libraries

# Core streaming libraries (asyncio ships with Python, so it is not installed here)
pip install aiortc opencv-python pyaudio numpy websockets

# PyAV is pulled in as an aiortc dependency; fractions is part of the standard library
pip install av
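
Before moving on, it is worth confirming every import resolves inside the virtual environment. A throwaway check, not part of pi_streamer.py (note that `cv2` is the import name for opencv-python, and `av` for PyAV):

```python
import importlib.util

def missing_modules(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

required = ["aiortc", "cv2", "pyaudio", "numpy", "websockets", "av"]
print(missing_modules(required))  # an empty list means you're ready
```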



Create the Raspberry Pi Streaming Server


Create pi_streamer.py:

import asyncio
import json
import logging
import cv2
import numpy as np
import websockets
import pyaudio
import os
import sys
from aiortc import RTCPeerConnection, RTCSessionDescription, VideoStreamTrack, RTCIceCandidate, MediaStreamTrack
from av import VideoFrame
from av.audio.frame import AudioFrame
import fractions

# Suppress ALSA warnings
class ALSAErrorSuppress:
    def __enter__(self):
        self.original_stderr = sys.stderr
        sys.stderr = open(os.devnull, 'w')
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        sys.stderr.close()
        sys.stderr = self.original_stderr

# Configuration
CAMERA_DEVICE = 0 # Change if your camera is on different device
CAMERA_WIDTH = 640
CAMERA_HEIGHT = 480
CAMERA_FPS = 30
WEBSOCKET_PORT = 8765

# Audio settings
AUDIO_SAMPLE_RATE = 48000
AUDIO_CHUNK = 960 # 20ms at 48kHz
AUDIO_CHANNELS = 1

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("pi-streamer")

class CameraStreamTrack(VideoStreamTrack):
    def __init__(self):
        super().__init__()
        self.cap = None
        self.frame_count = 0
        self._init_camera()

    def _init_camera(self):
        try:
            self.cap = cv2.VideoCapture(CAMERA_DEVICE)
            if self.cap.isOpened():
                self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, CAMERA_WIDTH)
                self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, CAMERA_HEIGHT)
                self.cap.set(cv2.CAP_PROP_FPS, CAMERA_FPS)
                self.cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
                logger.info("✅ Camera ready")
            else:
                logger.warning("⚠️ Camera not available, using test pattern")
                self.cap = None
        except Exception as e:
            logger.error(f"Camera error: {e}")
            self.cap = None

    async def recv(self):
        pts, time_base = await self.next_timestamp()

        if self.cap and self.cap.isOpened():
            ret, frame = self.cap.read()
            if ret:
                frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            else:
                frame = self._test_pattern()
        else:
            frame = self._test_pattern()

        video_frame = VideoFrame.from_ndarray(frame, format="rgb24")
        video_frame.pts = pts
        video_frame.time_base = time_base

        return video_frame

    def _test_pattern(self):
        """Generate a test pattern if the camera fails"""
        self.frame_count += 1
        frame = np.zeros((CAMERA_HEIGHT, CAMERA_WIDTH, 3), dtype=np.uint8)

        # Gradient background
        for y in range(CAMERA_HEIGHT):
            frame[y, :] = [y // 2, 100, 200 - y // 3]

        # Moving circle
        x = int((self.frame_count * 3) % CAMERA_WIDTH)
        y = int(CAMERA_HEIGHT // 2 + 50 * np.sin(self.frame_count * 0.1))
        cv2.circle(frame, (x, y), 20, (255, 255, 0), -1)

        # Text overlay
        cv2.putText(frame, "LIVE TEST PATTERN", (20, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)
        cv2.putText(frame, f"Frame: {self.frame_count}", (20, 80),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 1)

        return frame

    def stop(self):
        if self.cap:
            self.cap.release()
            logger.info("📹 Camera released")

class MicrophoneStreamTrack(MediaStreamTrack):
    kind = "audio"

    def __init__(self):
        super().__init__()
        self.sample_rate = AUDIO_SAMPLE_RATE
        self.chunk = AUDIO_CHUNK
        self.channels = AUDIO_CHANNELS
        self.pts = 0
        self.time_base = fractions.Fraction(1, self.sample_rate)

        self.p = None
        self.stream = None
        self.running = True
        self._init_audio()

    def _init_audio(self):
        try:
            with ALSAErrorSuppress():
                self.p = pyaudio.PyAudio()

            input_device = self._find_input_device()

            if input_device is not None:
                with ALSAErrorSuppress():
                    self.stream = self.p.open(
                        format=pyaudio.paInt16,
                        channels=self.channels,
                        rate=self.sample_rate,
                        input=True,
                        input_device_index=input_device,
                        frames_per_buffer=self.chunk
                    )
                logger.info(f"🎤 Audio input ready (device {input_device})")
            else:
                logger.warning("⚠️ No audio input found, using silence")
                self.stream = None

        except Exception as e:
            logger.error(f"❌ Audio input failed: {e}")
            self.stream = None

    def _find_input_device(self):
        if not self.p:
            return None

        try:
            for i in range(self.p.get_device_count()):
                try:
                    with ALSAErrorSuppress():
                        device_info = self.p.get_device_info_by_index(i)

                    if device_info.get('maxInputChannels', 0) > 0:
                        device_name = device_info.get('name', 'Unknown')
                        logger.info(f"Testing input device {i}: {device_name}")

                        try:
                            with ALSAErrorSuppress():
                                test_stream = self.p.open(
                                    format=pyaudio.paInt16,
                                    channels=self.channels,
                                    rate=self.sample_rate,
                                    input=True,
                                    input_device_index=i,
                                    frames_per_buffer=self.chunk
                                )
                            test_stream.close()
                            logger.info(f"✅ Device {i} works: {device_name}")
                            return i
                        except:
                            continue
                except:
                    continue
        except Exception as e:
            logger.error(f"Error finding input device: {e}")

        return None

    async def recv(self):
        if not self.running:
            raise ConnectionError("Audio track stopped")

        try:
            if self.stream and self.running:
                with ALSAErrorSuppress():
                    data = self.stream.read(self.chunk, exception_on_overflow=False)
            else:
                data = np.zeros(self.chunk, dtype=np.int16).tobytes()

            frame = AudioFrame(format="s16", layout="mono", samples=self.chunk)
            frame.sample_rate = self.sample_rate
            frame.planes[0].update(data)
            frame.pts = self.pts
            frame.time_base = self.time_base
            self.pts += self.chunk

            return frame

        except Exception as e:
            logger.warning(f"Audio error: {e}")
            silence = np.zeros(self.chunk, dtype=np.int16).tobytes()
            frame = AudioFrame(format="s16", layout="mono", samples=self.chunk)
            frame.sample_rate = self.sample_rate
            frame.planes[0].update(silence)
            frame.pts = self.pts
            frame.time_base = self.time_base
            self.pts += self.chunk
            return frame

    def stop(self):
        self.running = False
        if self.stream:
            try:
                with ALSAErrorSuppress():
                    self.stream.stop_stream()
                    self.stream.close()
                logger.info("🎤 Audio input stopped")
            except:
                pass

async def handle_client(websocket):
    client_id = f"client_{id(websocket)}"
    logger.info(f"🔗 {client_id} connected")

    pc = None
    camera_track = None
    audio_track = None

    try:
        async for message in websocket:
            try:
                data = json.loads(message)
                msg_type = data.get('type', 'unknown')

                if msg_type == 'offer':
                    logger.info("🎬 Processing offer...")

                    try:
                        pc = RTCPeerConnection()

                        camera_track = CameraStreamTrack()
                        audio_track = MicrophoneStreamTrack()

                        pc.addTrack(camera_track)
                        pc.addTrack(audio_track)
                        logger.info("📹🎤 Video and audio tracks added")

                        @pc.on("connectionstatechange")
                        async def on_state():
                            state = pc.connectionState
                            logger.info(f"🔄 State: {state}")
                            if state == "connected":
                                logger.info("🎉 CONNECTED! Streaming video + audio!")
                            elif state == "failed":
                                logger.error("❌ Connection failed")

                        logger.info("📥 Setting remote description...")
                        offer = RTCSessionDescription(sdp=data['sdp'], type=data['type'])
                        await pc.setRemoteDescription(offer)
                        logger.info("✅ Remote description set")

                        # aiortc gathers its ICE candidates before createAnswer
                        # resolves, so they are bundled into the answer SDP and
                        # no separate candidate messages are sent from this side
                        logger.info("📤 Creating answer...")
                        answer = await pc.createAnswer()
                        await pc.setLocalDescription(answer)
                        logger.info("✅ Answer created")

                        response = {
                            'type': 'answer',
                            'sdp': pc.localDescription.sdp
                        }
                        await websocket.send(json.dumps(response))
                        logger.info("✅ Answer sent successfully")

                    except Exception as e:
                        logger.error(f"❌ Offer processing failed: {e}")
                        continue

                elif msg_type == 'ice-candidate' and pc:
                    try:
                        cand_data = data.get('candidate')
                        if cand_data and cand_data.get('candidate'):
                            # aiortc's RTCIceCandidate constructor does not parse
                            # the browser's raw candidate string; convert it with
                            # candidate_from_sdp instead
                            from aiortc.sdp import candidate_from_sdp
                            sdp_str = cand_data['candidate']
                            if sdp_str.startswith('candidate:'):
                                sdp_str = sdp_str.split(':', 1)[1]
                            candidate = candidate_from_sdp(sdp_str)
                            candidate.sdpMid = cand_data.get('sdpMid')
                            candidate.sdpMLineIndex = cand_data.get('sdpMLineIndex')
                            await pc.addIceCandidate(candidate)
                            logger.debug("🧊 ICE candidate added")
                    except Exception as e:
                        logger.error(f"❌ ICE candidate error: {e}")

            except json.JSONDecodeError:
                logger.error("❌ Invalid JSON received")
            except Exception as e:
                logger.error(f"❌ Message processing error: {e}")

    except websockets.exceptions.ConnectionClosed:
        logger.info(f"👋 {client_id} disconnected")
    except Exception as e:
        logger.error(f"❌ Client handler error: {e}")
    finally:
        logger.info(f"🧹 Cleaning up {client_id}")
        if audio_track:
            audio_track.stop()
        if camera_track:
            camera_track.stop()
        if pc:
            try:
                await pc.close()
            except:
                pass

async def main():
    print("🚀 Raspberry Pi Video + Audio Streamer")
    print("=" * 45)

    # Camera test
    cap = cv2.VideoCapture(CAMERA_DEVICE)
    if cap.isOpened():
        ret, frame = cap.read()
        if ret:
            print(f"✅ Camera OK: {frame.shape}")
        else:
            print("⚠️ Camera detected but can't read")
        cap.release()
    else:
        print("⚠️ No camera - will use test pattern")

    # Audio test
    print("🔍 Testing audio devices...")
    try:
        with ALSAErrorSuppress():
            p = pyaudio.PyAudio()

        input_devices = []
        for i in range(p.get_device_count()):
            try:
                with ALSAErrorSuppress():
                    info = p.get_device_info_by_index(i)
                if info.get('maxInputChannels', 0) > 0:
                    input_devices.append(f"  Input {i}: {info.get('name', 'Unknown')}")
            except:
                continue

        print(f"📥 Input devices found ({len(input_devices)}):")
        for device in input_devices:
            print(device)

        with ALSAErrorSuppress():
            p.terminate()
    except Exception as e:
        print(f"❌ Audio device test failed: {e}")

    print(f"🌐 Starting server on port {WEBSOCKET_PORT}")

    server = await websockets.serve(handle_client, "0.0.0.0", WEBSOCKET_PORT)
    print("✅ Server ready!")
    print("📱 Connect from your web browser now")
    print("=" * 45)

    await server.wait_closed()

if __name__ == "__main__":
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        # Ctrl-C lands here, outside the event loop
        print("\n🛑 Shutting down...")
        print("👋 Goodbye!")
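
Why AUDIO_CHUNK is 960: WebRTC's Opus codec works on 20 ms frames, and at 48 kHz that is 48000 × 0.02 = 960 samples. The pts arithmetic in MicrophoneStreamTrack follows directly — each frame advances the clock by `chunk` ticks of the 1/48000 time base. The math can be checked with the standard library alone:

```python
from fractions import Fraction

AUDIO_SAMPLE_RATE = 48000
AUDIO_CHUNK = 960
time_base = Fraction(1, AUDIO_SAMPLE_RATE)

# Each frame spans chunk * time_base seconds
frame_duration = AUDIO_CHUNK * time_base
print(frame_duration)                 # 1/50 of a second
print(float(frame_duration) * 1000)   # 20.0 milliseconds

# After n frames, pts * time_base gives elapsed seconds
pts = 0
for _ in range(50):                   # 50 frames of 20 ms
    pts += AUDIO_CHUNK
print(pts * time_base)                # exactly 1 second
```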

Test the Server

# Make sure you're in virtual environment
source ~/pi-streamer/myenv/bin/activate

# Run the streaming server
python pi_streamer.py
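
If the browser later fails to connect, first check that the signaling port is reachable from your computer at all. A small stdlib probe (the IP below is a placeholder — substitute your Pi's address):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe the WebSocket signaling port on the Pi (placeholder IP)
print(port_open("192.168.11.148", 8765))
```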

You should see:

🚀 Raspberry Pi Video + Audio Streamer
=============================================
✅ Camera OK: (480, 640, 3)
🔍 Testing audio devices...
📥 Input devices found (2):
  Input 1: USB Audio Device
  Input 2: bcm2835 Audio
🌐 Starting server on port 8765
✅ Server ready!
📱 Connect from your web browser now
=============================================


Create the React Web App

Set Up React + Vite Project

# On your computer (not Raspberry Pi)
npm create vite@latest pi-stream-viewer -- --template react
cd pi-stream-viewer
npm install

Install Tailwind CSS v4 (Modern Setup)

# Install Vite and Tailwind CSS v4
npm install vite@6.1.0
npm install tailwindcss @tailwindcss/vite

Configure Vite for Tailwind CSS v4

Update vite.config.js:
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
import tailwindcss from '@tailwindcss/vite'

// https://vite.dev/config/
export default defineConfig({
  plugins: [
    react(),
    tailwindcss()
  ],
  define: {
    global: {},
  },
})

Setup Tailwind CSS

Empty src/index.css and add only this:

@import "tailwindcss";

Create the Stream Viewer Component (src/components/StreamViewer.jsx)

import React, { useEffect, useRef, useState } from 'react';

const StreamViewer = () => {
  const [isConnected, setIsConnected] = useState(false);
  const [isConnecting, setIsConnecting] = useState(false);
  const [error, setError] = useState(null);
  const [isFullscreen, setIsFullscreen] = useState(false);

  const remoteVideoRef = useRef(null);
  const remoteAudioRef = useRef(null);
  const websocketRef = useRef(null);
  const peerConnectionRef = useRef(null);
  const containerRef = useRef(null);

  // 🔧 UPDATE THIS IP ADDRESS
  const RASPBERRY_PI_IP = '192.168.11.148'; // Change to your Pi's IP
  const WEBSOCKET_PORT = 8765;

  const iceServers = [
    { urls: 'stun:stun.l.google.com:19302' },
    { urls: 'stun:stun1.l.google.com:19302' }
  ];

  const connectToRaspberryPi = async () => {
    try {
      setIsConnecting(true);
      setError(null);

      const ws = new WebSocket(`ws://${RASPBERRY_PI_IP}:${WEBSOCKET_PORT}`);
      websocketRef.current = ws;

      ws.onopen = async () => {
        await initializeWebRTC();
      };

      ws.onmessage = async (event) => {
        const data = JSON.parse(event.data);
        await handleSignalingMessage(data);
      };

      ws.onerror = (error) => {
        setError('Failed to connect to Raspberry Pi');
        setIsConnecting(false);
      };

      ws.onclose = () => {
        setIsConnected(false);
        setIsConnecting(false);
      };

    } catch (error) {
      setError(`Failed to connect: ${error.message}`);
      setIsConnecting(false);
    }
  };

  const initializeWebRTC = async () => {
    try {
      const pc = new RTCPeerConnection({ iceServers });
      peerConnectionRef.current = pc;

      pc.ontrack = (event) => {
        const track = event.track;

        if (track.kind === 'video' && remoteVideoRef.current) {
          remoteVideoRef.current.srcObject = event.streams[0];
        } else if (track.kind === 'audio' && remoteAudioRef.current) {
          remoteAudioRef.current.srcObject = event.streams[0];
          remoteAudioRef.current.volume = 0.8;
        }
      };

      pc.onicecandidate = (event) => {
        if (event.candidate && websocketRef.current) {
          websocketRef.current.send(JSON.stringify({
            type: 'ice-candidate',
            candidate: event.candidate
          }));
        }
      };

      pc.onconnectionstatechange = () => {
        const state = pc.connectionState;

        if (state === 'connected') {
          setIsConnected(true);
          setIsConnecting(false);
        } else if (state === 'failed') {
          setError('WebRTC connection failed');
          setIsConnecting(false);
        } else if (state === 'disconnected' || state === 'closed') {
          setIsConnected(false);
        }
      };

      const offer = await pc.createOffer({
        offerToReceiveVideo: true,
        offerToReceiveAudio: true
      });
      await pc.setLocalDescription(offer);

      websocketRef.current.send(JSON.stringify({
        type: 'offer',
        sdp: offer.sdp
      }));

    } catch (error) {
      setError('Failed to initialize WebRTC');
      setIsConnecting(false);
    }
  };

  const handleSignalingMessage = async (data) => {
    try {
      const pc = peerConnectionRef.current;

      if (data.type === 'answer') {
        await pc.setRemoteDescription(new RTCSessionDescription({
          type: data.type,
          sdp: data.sdp
        }));
      } else if (data.type === 'ice-candidate') {
        await pc.addIceCandidate(new RTCIceCandidate(data.candidate));
      } else if (data.type === 'error') {
        setError(`Server error: ${data.message}`);
      }
    } catch (error) {
      console.error('Error handling signaling message:', error);
    }
  };

  const disconnect = () => {
    if (peerConnectionRef.current) {
      peerConnectionRef.current.close();
      peerConnectionRef.current = null;
    }

    if (websocketRef.current) {
      websocketRef.current.close();
      websocketRef.current = null;
    }

    if (remoteVideoRef.current) {
      remoteVideoRef.current.srcObject = null;
    }

    if (remoteAudioRef.current) {
      remoteAudioRef.current.srcObject = null;
    }

    setIsConnected(false);
    setIsConnecting(false);
    setError(null);
  };

  const toggleFullscreen = () => {
    if (!document.fullscreenElement) {
      containerRef.current?.requestFullscreen();
      setIsFullscreen(true);
    } else {
      document.exitFullscreen();
      setIsFullscreen(false);
    }
  };

  useEffect(() => {
    const handleFullscreenChange = () => {
      setIsFullscreen(!!document.fullscreenElement);
    };

    document.addEventListener('fullscreenchange', handleFullscreenChange);
    return () => document.removeEventListener('fullscreenchange', handleFullscreenChange);
  }, []);

  useEffect(() => {
    return () => {
      disconnect();
    };
  }, []);

  return (
    <div ref={containerRef} className="h-screen bg-gradient-to-br from-slate-900 via-slate-800 to-slate-900 flex flex-col overflow-hidden">
      {/* Header */}
      <div className="flex-shrink-0 bg-black/20 backdrop-blur-sm border-b border-white/10">
        <div className="px-6 py-4">
          <div className="flex items-center justify-between">
            <div className="flex items-center space-x-3">
              <div className="w-3 h-3 bg-red-500 rounded-full animate-pulse"></div>
              <h1 className="text-xl font-bold text-white">
                Raspberry Pi Remote Viewer
              </h1>
              {isConnected && (
                <div className="flex items-center space-x-2 text-green-400 text-sm">
                  <div className="w-2 h-2 bg-green-400 rounded-full"></div>
                  <span>Connected</span>
                </div>
              )}
            </div>

            <div className="flex items-center space-x-3">
              {!isConnected && !isConnecting && (
                <button
                  onClick={connectToRaspberryPi}
                  className="bg-gradient-to-r from-blue-500 to-blue-600 hover:from-blue-600 hover:to-blue-700 text-white px-6 py-2.5 rounded-lg font-medium transition-all duration-200 shadow-lg hover:shadow-xl transform hover:scale-105"
                >
                  <span className="flex items-center space-x-2">
                    <span>🔗</span>
                    <span>Connect</span>
                  </span>
                </button>
              )}

              {isConnecting && (
                <div className="flex items-center space-x-2 text-blue-400">
                  <div className="w-5 h-5 border-2 border-blue-400 border-t-transparent rounded-full animate-spin"></div>
                  <span className="font-medium">Connecting...</span>
                </div>
              )}

              {isConnected && (
                <div className="flex items-center space-x-2">
                  <button
                    onClick={toggleFullscreen}
                    className="bg-gray-700 hover:bg-gray-600 text-white px-3 py-2 rounded-lg transition-colors duration-200"
                    title={isFullscreen ? "Exit Fullscreen" : "Enter Fullscreen"}
                  >
                    {isFullscreen ? "⛶" : "⛶"}
                  </button>
                  <button
                    onClick={disconnect}
                    className="bg-gradient-to-r from-red-500 to-red-600 hover:from-red-600 hover:to-red-700 text-white px-4 py-2 rounded-lg font-medium transition-all duration-200 shadow-lg hover:shadow-xl"
                  >
                    <span className="flex items-center space-x-2">
                      <span>🛑</span>
                      <span>Disconnect</span>
                    </span>
                  </button>
                </div>
              )}
            </div>
          </div>

          {error && (
            <div className="mt-3 p-3 bg-red-500/20 border border-red-500/30 rounded-lg text-red-300">
              <div className="flex items-center space-x-2">
                <span>❌</span>
                <span>{error}</span>
              </div>
            </div>
          )}
        </div>
      </div>

      {/* Main Content Area */}
      <div className="flex-1 flex items-center justify-center p-6 min-h-0">
        <div className="w-full h-full max-w-7xl mx-auto">
          <div className="relative w-full h-full bg-black/50 backdrop-blur-sm rounded-2xl overflow-hidden shadow-2xl border border-white/10">
            {!isConnected && !isConnecting && (
              <div className="absolute inset-0 flex items-center justify-center">
                <div className="text-center text-white/60">
                  <div className="text-6xl mb-4">📹</div>
                  <h3 className="text-xl font-medium mb-2">No Stream Connected</h3>
                  <p className="text-white/40">Click "Connect" to start viewing your Raspberry Pi stream</p>
                </div>
              </div>
            )}

            {isConnecting && (
              <div className="absolute inset-0 flex items-center justify-center">
                <div className="text-center text-white">
                  <div className="w-16 h-16 border-4 border-blue-400 border-t-transparent rounded-full animate-spin mx-auto mb-4"></div>
                  <h3 className="text-xl font-medium mb-2">Establishing Connection</h3>
                  <p className="text-white/60">Setting up WebRTC connection...</p>
                </div>
              </div>
            )}

            <video
              ref={remoteVideoRef}
              autoPlay
              playsInline
              className={`w-full h-full object-contain transition-opacity duration-500 ${
                isConnected ? 'opacity-100' : 'opacity-0'
              }`}
            />

            {isConnected && (
              <div className="absolute bottom-4 left-4 right-4">
                <div className="bg-black/30 backdrop-blur-sm rounded-lg px-4 py-2">
                  <div className="flex items-center justify-between text-white text-sm">
                    <div className="flex items-center space-x-4">
                      <div className="flex items-center space-x-2">
                        <div className="w-2 h-2 bg-green-400 rounded-full animate-pulse"></div>
                        <span>Live Stream</span>
                      </div>
                      <span className="text-white/60">|</span>
                      <span className="text-white/60">IP: {RASPBERRY_PI_IP}</span>
                    </div>
                  </div>
                </div>
              </div>
            )}
          </div>
        </div>
      </div>

      <audio ref={remoteAudioRef} autoPlay />
    </div>
  );
};

export default StreamViewer;

Update Your App (src/App.jsx)

import StreamViewer from "./components/StreamViewer";

function App() {
  return (
    <div>
      <StreamViewer />
    </div>
  );
}

export default App;

Test the React App

# Find your Raspberry Pi's IP address first
# On the Raspberry Pi, run: hostname -I

# Update RASPBERRY_PI_IP near the top of src/components/StreamViewer.jsx
# const RASPBERRY_PI_IP = 'YOUR_PI_IP_HERE';

# Start the development server
npm run dev

Your Vite app will start on http://localhost:5173
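
For reference, the whole signaling protocol between the React app and the Pi is just three JSON message shapes exchanged over the WebSocket. Sketching them in Python makes the contract explicit (field names match the code above; the SDP and candidate strings below are placeholders, not real session data):

```python
import json

# Browser -> Pi: the WebRTC offer
offer = {"type": "offer", "sdp": "v=0\r\n..."}

# Pi -> Browser: the answer
answer = {"type": "answer", "sdp": "v=0\r\n..."}

# Browser -> Pi: trickled ICE candidates
ice = {
    "type": "ice-candidate",
    "candidate": {
        "candidate": "candidate:0 1 UDP 2122252543 192.168.11.148 50000 typ host",
        "sdpMid": "0",
        "sdpMLineIndex": 0,
    },
}

# Each message survives the JSON round-trip the WebSocket layer performs
for msg in (offer, answer, ice):
    assert json.loads(json.dumps(msg)) == msg
    print(msg["type"])
```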

Final Result – Stream in Action

After completing the setup, here's what your Raspberry Pi Remote Viewer will look like in action.

✅ When the connection is active, you will see a live video stream from your Raspberry Pi's camera along with the IP address displayed on screen.

❌ When disconnected, the interface shows a friendly message prompting you to start the stream.

This clear separation between connected and disconnected states enhances user feedback and improves the experience on both desktop and mobile.

🧪 Make sure your Raspberry Pi is on the same network and running the WebRTC streaming server script.
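
"Same network" usually means the Pi and the viewer share a subnet, so the STUN-discovered candidates can connect directly. A quick sanity check with the stdlib `ipaddress` module, assuming a typical /24 home network (the addresses are examples):

```python
import ipaddress

def same_subnet(ip_a, ip_b, prefix=24):
    """True if both addresses fall inside the same /prefix network."""
    net = ipaddress.ip_network(f"{ip_a}/{prefix}", strict=False)
    return ipaddress.ip_address(ip_b) in net

print(same_subnet("192.168.11.148", "192.168.11.42"))  # True: same LAN
print(same_subnet("192.168.11.148", "10.0.0.5"))       # False: different network
```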