A real-time obstacle detection system with haptic feedback, designed for visual assistance applications.
Demo video: https://youtu.be/NBzJI06XH1o?feature=shared
- Real-time Detection: Monitor three zones (LEFT, CENTER, RIGHT) for obstacles
- WebSocket Communication: Low-latency real-time data streaming
- Phone Camera Support: Stream camera feed from your phone using Expo Go
- Dual Interface:
- Web UI with live video feed and visual alerts
- Terminal UI for quick status monitoring
- Haptic Feedback System: Zone-specific alerts for obstacle detection
- Distance Monitoring: Track obstacle distances in centimeters
```
glaucoguard/
├── backend/
│   ├── server.py             # WebSocket server with detection logic
│   └── terminal_ui.py        # Terminal-based monitoring interface
├── web/
│   └── index.html            # Web-based UI with live feed
├── mobile/                   # Expo app for phone camera streaming
│   ├── App.js                # React Native camera app
│   ├── package.json          # Mobile app dependencies
│   └── README.md             # Mobile app setup guide
├── requirements.txt          # Python dependencies
├── README.md                 # This file
└── SETUP_PHONE_CAMERA.md     # Phone camera setup guide
```
Install the Python dependencies:

```bash
pip install -r requirements.txt
```

Start the server:

```bash
python backend/server.py
```

The WebSocket server will start on `ws://localhost:8765`.
Open web/index.html in a modern web browser. The UI will:
- Automatically connect to the WebSocket server
- Display real-time detection data
- Show video feed from phone camera (if connected)
To stream your phone's camera to the laptop, see `SETUP_PHONE_CAMERA.md` for detailed instructions. Quick steps:

- Find your laptop's IP address
- Update `mobile/App.js` with your IP
- Install dependencies: `cd mobile && npm install`
- Run `npm start` and scan the QR code with Expo Go
- Connect and start streaming from the phone app
For a lightweight monitoring interface:

```bash
python backend/terminal_ui.py
```

The web interface provides:
- Live Video Feed: Real-time camera display
- Zone Indicators: Visual alerts for LEFT, CENTER, and RIGHT zones
- Distance Display: Shows obstacle distances in centimeters
- Status Panel:
- System status (ACTIVE/OFFLINE)
- Detection count
- Haptic feedback status
- Connection Indicator: Shows WebSocket connection state
- Active Zones: Red pulsing animation when obstacle detected < 200cm
- Auto-Reconnect: Automatically reconnects if connection is lost
- Responsive Design: Adapts to different screen sizes
Simple text-based interface showing:
```
=== Visionary STATUS ===
LEFT:   🔴 OBSTACLE DETECTED (120cm)
CENTER: ✅ CLEAR
RIGHT:  ✅ CLEAR
HAPTIC: LEFT ACTIVE
==========================
```
Status Indicators:
- 🔴 DANGER: Obstacle < 100cm
- 🟡 WARNING: Obstacle 100-200cm
- ✅ CLEAR: No obstacle or > 200cm
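The thresholds above can be captured in a small helper. This is a sketch, not code from `backend/server.py`; the name `classify_distance` is hypothetical:

```python
def classify_distance(distance_cm):
    """Map a zone's distance to a status label using the thresholds above.

    distance_cm may be None when no obstacle is detected in the zone.
    """
    if distance_cm is None or distance_cm > 200:
        return "CLEAR"    # no obstacle, or farther than 200cm
    if distance_cm < 100:
        return "DANGER"   # closer than 100cm
    return "WARNING"      # between 100cm and 200cm
```

Both the terminal UI and the web zone indicators can share one function like this, so the thresholds stay consistent across interfaces.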
The current implementation includes a simulation mode for testing. In production:

- Replace simulation with actual sensors:
  - Ultrasonic sensors
  - LiDAR
  - Depth cameras (RealSense, etc.)
- Integrate computer vision:
  - Object detection models
  - Depth estimation
  - Semantic segmentation
- Add haptic hardware:
  - Vibration motors
  - Haptic feedback controllers
  - Zone-specific actuators
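For the haptic side, the simplest wiring is one vibration motor per zone. The sketch below is an assumption about how that mapping might look; the pin numbers and the idea of driving motors via GPIO are illustrative, not part of this repo:

```python
# Hypothetical wiring: one vibration motor per zone.
# BCM pin numbers here are placeholders -- match them to your actual wiring.
HAPTIC_PINS = {"left": 17, "center": 27, "right": 22}

def haptic_pin_for(zone):
    """Return the GPIO pin to pulse for the active haptic zone, or None.

    The server reports "none" when no zone is active; that (and any
    unknown value) maps to None so no motor fires.
    """
    return HAPTIC_PINS.get(zone)
```

On a Raspberry Pi, the returned pin would then be pulsed with a library such as `RPi.GPIO` or `gpiozero`; keeping the zone-to-pin mapping pure like this makes it testable off-device.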
```json
{
  "timestamp": "2025-11-08T12:34:56.789",
  "detection": true,
  "zones": {
    "left": 120,
    "center": null,
    "right": 250
  },
  "haptic": "left"
}
```

Fields:

- `timestamp`: ISO 8601 timestamp
- `detection`: Boolean indicating whether any obstacle was detected < 200cm
- `zones`: Distance in cm for each zone (`null` if no obstacle)
- `haptic`: Active haptic zone (`"left"`, `"center"`, `"right"`, or `"none"`)
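A client can consume messages of this shape with just the standard library. This is a sketch against the format above; `active_zones` is a hypothetical helper, not part of the repo:

```python
import json

def active_zones(message_text, threshold_cm=200):
    """Return the zones whose reported distance is under the alert threshold.

    Zones reported as null (no obstacle) are skipped.
    """
    msg = json.loads(message_text)
    return [zone for zone, dist in msg["zones"].items()
            if dist is not None and dist < threshold_cm]

# Example message matching the documented format.
sample = ('{"timestamp": "2025-11-08T12:34:56.789", "detection": true, '
          '"zones": {"left": 120, "center": null, "right": 250}, '
          '"haptic": "left"}')
```

With the sample above, `active_zones(sample)` yields `["left"]`: center is `null` and right is beyond the 200cm threshold.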
Currently, the server accepts client commands for future expansion:

```json
{
  "command": "start_detection",
  "params": {}
}
```

Edit `backend/server.py`:

```python
server = VisionaryServer(
    host="localhost",  # Change to "0.0.0.0" for network access
    port=8765          # Change port if needed
)
```

Adjust in `server.py` → `simulate_detection()`:
- Detection frequency: change `await asyncio.sleep(0.5)` in `detection_loop()`
- Danger threshold: modify the distance comparisons (currently 200cm)
- Zone ranges: adjust the random distance generation
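The pieces being tuned fit together roughly as follows. This is a sketch of how a detection payload could be assembled around the threshold, not the actual `server.py` implementation; `build_payload` is a hypothetical name:

```python
from datetime import datetime

DANGER_THRESHOLD_CM = 200  # the distance comparison being tuned above

def build_payload(zones, threshold_cm=DANGER_THRESHOLD_CM):
    """Assemble one detection message from per-zone distances (cm or None)."""
    hit = [z for z, d in zones.items() if d is not None and d < threshold_cm]
    return {
        "timestamp": datetime.now().isoformat(),
        "detection": bool(hit),
        "zones": zones,
        # In this sketch the first zone under threshold gets the haptic alert.
        "haptic": hit[0] if hit else "none",
    }
```

Keeping the threshold in one named constant means the detection flag, the haptic choice, and any UI coloring all change together when you tune it.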
Edit `web/index.html`:

- Colors: modify the CSS variables in the `<style>` section
- WebSocket URL: change `ws://localhost:8765` in the JavaScript
- Update frequency: adjust the reconnection interval (currently 3000ms)
Replace the `simulate_detection()` method in `server.py`:

```python
def get_sensor_data(self) -> Dict[str, Any]:
    """Connect to actual sensors."""
    # Example: read from ultrasonic sensors
    left_distance = self.read_ultrasonic_sensor(pin=LEFT_SENSOR_PIN)
    center_distance = self.read_ultrasonic_sensor(pin=CENTER_SENSOR_PIN)
    right_distance = self.read_ultrasonic_sensor(pin=RIGHT_SENSOR_PIN)
    return {
        "zones": {
            "left": left_distance,
            "center": center_distance,
            "right": right_distance
        }
    }
```

Integrate with OpenCV or other CV libraries:
```python
import cv2
import numpy as np

def detect_obstacles_cv(self, frame):
    """Use computer vision for obstacle detection."""
    # Implement depth estimation, object detection, etc.
    pass
```

Create tests in the `tests/` directory:
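For example, a first test might pin down the documented message shape. The file name and helper below are suggestions; adjust the imports and names to the actual server API:

```python
# tests/test_detection.py -- a sketch; adapt to the real server module.

def is_valid_payload(msg):
    """Minimal schema check for a detection message."""
    return (
        isinstance(msg.get("detection"), bool)
        and set(msg.get("zones", {})) == {"left", "center", "right"}
        and msg.get("haptic") in {"left", "center", "right", "none"}
    )

def test_payload_shape():
    msg = {"timestamp": "2025-11-08T12:34:56.789", "detection": True,
           "zones": {"left": 120, "center": None, "right": 250},
           "haptic": "left"}
    assert is_valid_payload(msg)
```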
```bash
python -m pytest tests/
```

Software requirements:

- Python 3.8+
- Modern web browser
- Webcam (optional, for video feed)
Hardware (for production deployment):

- Raspberry Pi 4 or equivalent
- Ultrasonic sensors (HC-SR04 or similar) x3
- Haptic feedback motors x3
- Camera module (optional)
- Ensure the server is running: `python backend/server.py`
- Check firewall settings
- Verify the port (8765) is not in use
- Try accessing via `http://` (not `https://`) for local testing
- Check browser permissions
- Ensure HTTPS or localhost (required for camera access)
- Try different browser
- Check browser console for errors (F12)
- Verify WebSocket connection status (top-right indicator)
- Check server logs for errors
MIT License - See LICENSE file for details
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
- Add machine learning-based object detection
- Implement real-time depth mapping
- Add audio feedback system
- Create mobile app version
- Add data logging and analytics
- Implement multi-user support
- Add configuration web interface
- Integration with smart glasses
- Cloud-based processing option
Built for accessibility and safety 🦯