🦉 HorusAI

HackNYU 2025 Project

Empowering the visually impaired to locate and identify everyday objects using voice-guided YOLOv8 object detection.

HorusAI leverages state-of-the-art AI and computer vision to provide real-time auditory feedback for object recognition, improving accessibility and independence.


🚀 Features

  • 🎤 Voice-Guided Interaction – Uses OpenAI Whisper for speech-to-text processing.
  • 🔍 Real-Time Object Detection – Powered by YOLOv8 for accurate identification.
  • 🗣 AI-Powered Descriptions – OpenAI API enhances object recognition with contextual information.
  • 🏗 User-Friendly Interface – Built with Streamlit for easy deployment and accessibility.
  • 📦 Lightweight & Scalable – Cloud integration with MongoDB Atlas for data management.
  • 🎮 Audio Feedback – Pygame enables dynamic sound responses.
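Putting the first two features together, the app has to map a Whisper transcript onto a YOLOv8 class label before it can search camera frames for the object. A minimal sketch of that matching step (function and label names here are illustrative, not the actual `newapp.py` API):

```python
# Sketch: match a transcribed voice request to a known YOLO class label.
# The helper name and label set are hypothetical, not taken from newapp.py.

def extract_target(transcript, known_labels):
    """Return the first YOLO class label mentioned in a Whisper transcript,
    e.g. "Where did I leave my cup?" -> "cup"."""
    cleaned = transcript.lower().replace("?", "").replace(".", "")
    for word in cleaned.split():
        if word in known_labels:
            return word
    return None  # nothing the detector knows about was mentioned

# A few labels from the COCO set that yolov8n.pt ships with:
labels = {"person", "cup", "bottle", "glasses"}
print(extract_target("Where did I leave my cup?", labels))  # -> cup
```

The returned label would then be compared against the class names YOLOv8 reports for each detected bounding box, with Pygame playing the audio cue on a match.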

🛠️ Technologies Used

| Technology | Purpose |
|------------|---------|
| 🐍 Python | Core programming language |
| 🎨 Streamlit | Web application framework |
| 🤖 OpenAI API | AI-powered descriptions |
| 🗣 OpenAI Whisper | Speech-to-text processing |
| 📦 MongoDB Atlas | Cloud-based database |
| 🎯 YOLOv8 | Real-time object detection |
| 🎵 Pygame | Audio feedback system |
| 📸 OpenCV | Image processing |

📂 Project Structure

📁 HorusAI
│── 📝 README.md – Project documentation
│── 📜 newapp.py – Main application script
│── 🎯 yolov8n.pt – General pre-trained YOLOv8 model
│── 🔬 best.pt – Custom YOLOv8 model trained on glasses dataset

📌 Pretrained Model Dataset:
YOLOv8 Glasses Dataset on Kaggle
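Since the repository ships two weights files, the app presumably chooses between the custom glasses model and the general COCO model depending on the requested object. A sketch of that choice (the custom model's label set is an assumption, not verified against `newapp.py`):

```python
# Sketch: pick which YOLOv8 weights file to load for a requested object.
# Assumes best.pt only covers eyewear classes -- an assumption for illustration.

CUSTOM_MODEL_CLASSES = {"glasses", "sunglasses"}  # hypothetical label set

def weights_for(target):
    """Return the weights file best suited to detect `target`."""
    if target in CUSTOM_MODEL_CLASSES:
        return "best.pt"    # custom model trained on the glasses dataset
    return "yolov8n.pt"     # general-purpose pre-trained model

print(weights_for("glasses"))  # -> best.pt
print(weights_for("cup"))      # -> yolov8n.pt
```

The chosen path would then be passed to `ultralytics.YOLO(...)` before running inference on camera frames.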


🎯 How to Use

  1. Install dependencies:
    pip install -r requirements.txt
  2. Run the app with Streamlit:
    streamlit run newapp.py
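The repository's `requirements.txt` is not reproduced here; based on the Technologies table above, a plausible dependency list would be (package names only, versions unpinned and unverified):

```text
streamlit
openai
openai-whisper
ultralytics
pymongo
pygame
opencv-python
```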
