A revolutionary AI-powered system for real-time emotion, stress, ADHD, and accessibility detection with analytics and visualization capabilities, ready for Flask integration.
- Emotion Recognition: Real-time facial emotion detection using DeepFace
- Stress Level Monitoring: Multi-factor stress assessment
- ADHD Pattern Detection: Movement and attention pattern analysis
- Accessibility Assessment: Visual, motor, and cognitive accessibility needs detection
- Attention & Engagement Tracking: Focus stability and engagement level monitoring
- Real-time Analytics: Live data processing and storage
- Tiny Graphs: Flask-ready Plotly visualizations
- Performance Metrics: Comprehensive KPI tracking
- AI-Powered Insights: Automated pattern recognition and recommendations
- Critical Alerts: Immediate notifications for concerning patterns
- REST API: Complete API endpoints for web integration
- Dashboard: Ready-to-use web dashboard
- Real-time Updates: Live data streaming capabilities
- Export Functionality: Session data export for analysis
```bash
pip install -r requirements.txt
```

- OpenCV (camera and computer vision)
- MediaPipe (face and hand tracking)
- DeepFace (emotion recognition)
- Plotly (interactive visualizations)
- Pandas (data analysis)
- NumPy (numerical computing)
- Flask (web framework)
```python
from adaptive_accessibility import AdaptiveLearningCV

# Initialize the tracker
tracker = AdaptiveLearningCV()

# Process a frame (from camera or video)
result = tracker.process_frame(frame)

# Get analytics report
report = tracker.generate_analytics_report()
```

```python
from adaptive_accessibility import FlaskAccessibilityAPI

# Initialize Flask API
flask_api = FlaskAccessibilityAPI()

# Start tracking
result = flask_api.start_tracking()

# Get dashboard data
dashboard_data = flask_api.get_analytics_dashboard_data()

# Get tiny graphs for web display
graphs = flask_api.get_tiny_graphs_json()
```

Run the built-in demo:

```bash
python adaptive_accessibility.py
```

Choose from:
- Camera Tracking: Live webcam analysis
- Flask Integration Demo: Generate sample data and test web integration
- Sample Data Generation: Create analytics data for testing
```bash
python flask_example.py
```

Then open http://localhost:5000 for the interactive dashboard.
- `POST /api/start-tracking` - Start camera tracking
- `POST /api/stop-tracking` - Stop tracking
- `GET /api/current-data` - Get real-time data
- `GET /api/dashboard-data` - Complete dashboard data
- `GET /api/tiny-graphs` - Plotly visualizations
- `GET /api/recommendations` - Accessibility recommendations
- `GET /api/insights` - AI-powered insights
- `POST /api/demo-data` - Generate demo data
- `GET /api/export-session` - Export session analytics
Main computer vision and analytics engine.
Key Methods:
- `process_frame(frame)` - Analyze a single video frame
- `generate_analytics_report()` - Create comprehensive analytics
- `get_flask_interface_data()` - Format data for web display
Web integration helper class.
Key Methods:
- `start_tracking()` / `stop_tracking()` - Control tracking
- `get_analytics_dashboard_data()` - Dashboard data
- `get_tiny_graphs_json()` - Web-ready visualizations
- `simulate_demo_data()` - Generate test data
- `LearningState` - Current user state
- `AccessibilityAssessment` - Accessibility needs analysis
- Emotion Breakdown: detected emotions over time.
- Stress/Attention/Engagement Trends: multi-line chart tracking stress, attention, and engagement.
- ADHD Risk Gauge: gauge chart showing the current ADHD risk assessment.
- Strain Heat Map: heat map of visual strain, cognitive load, and stress levels.
- Performance Radar: radar chart showing attention, engagement, motor precision, and focus stability.
- Average stress levels and trends
- Dominant emotions and emotional diversity
- ADHD risk categorization
- Attention stability metrics
- Visual strain indicators
- Cognitive load assessments
- Automatic pattern detection
- Trend analysis
- Risk assessment
- Performance optimization suggestions
- Stress management techniques
- ADHD support strategies
- Attention improvement methods
- Visual comfort adjustments
- Accessibility adaptations
- Adaptive learning platforms
- Student engagement monitoring
- Accessibility compliance
- Personalized content delivery
- ADHD assessment tools
- Stress monitoring systems
- Accessibility evaluation
- Patient engagement tracking
- Employee well-being monitoring
- Ergonomic assessments
- Productivity optimization
- Stress intervention systems
- Human-computer interaction studies
- Accessibility research
- Educational effectiveness analysis
- Behavioral pattern studies
```python
# Specify camera index
flask_api.start_tracking(camera_index=0)

# Generate report for specific time period
report = tracker.generate_analytics_report(time_window_hours=24)

# Get graphs for specific timeframe
graphs = flask_api.get_tiny_graphs_json(time_window_hours=2)

# Generate sample data for testing
result = flask_api.simulate_demo_data(duration_minutes=5)
```

```python
session_data = tracker.export_session_data()
# Contains: emotions, attention, movements, analytics, alerts

analytics_report = flask_api.export_session_analytics()
# Contains: complete analytics, visualizations, insights, recommendations
```

- Frame Processing: System processes every 3rd frame for optimal performance
- Data Storage: Maintains rolling buffer of 1000 recent data points
- Background Processing: Camera tracking runs in separate thread
- Memory Management: Automatic cleanup of old data and alerts
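The frame-skip and rolling-buffer behavior described above can be sketched with a `collections.deque`. The loop and the stored payload are stand-ins; the real system stores analysis results from the camera.

```python
from collections import deque

FRAME_SKIP = 3      # process every 3rd frame
BUFFER_SIZE = 1000  # keep only the most recent data points

results = deque(maxlen=BUFFER_SIZE)  # old entries drop off automatically

for frame_index in range(3500):      # stand-in for a camera loop
    if frame_index % FRAME_SKIP != 0:
        continue                     # skip 2 of every 3 frames
    results.append({"frame": frame_index})  # stand-in for analysis output

print(len(results))  # capped at 1000 despite ~1167 processed frames
```

Using `maxlen` delegates the memory-management cleanup to the deque itself: appending beyond the limit silently evicts the oldest entry.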
- Camera Access: Ensure camera permissions are granted
- Dependencies: Install all required packages from requirements.txt
- Performance: Reduce frame processing frequency if needed
- Unicode Issues: System automatically handles emoji display issues
- All API endpoints include comprehensive error handling
- Graceful degradation when camera is unavailable
- Fallback mechanisms for failed emotion detection
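The fallback idea for failed emotion detection might look like the sketch below. Both functions are hypothetical stand-ins for illustration, not the library's actual API.

```python
def detect_emotion(frame):
    # Stand-in for the DeepFace call; here it always simulates a failure
    raise RuntimeError("no face found")

def safe_detect_emotion(frame, default="neutral"):
    """Return a detected emotion, or a neutral default on failure."""
    try:
        return detect_emotion(frame)
    except Exception:
        return default  # graceful degradation instead of crashing

print(safe_detect_emotion(None))  # falls back to "neutral"
```

Wrapping each per-frame detector this way lets one bad frame degrade to a default value rather than stopping the whole tracking loop.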
```python
from flask import Flask, jsonify
from adaptive_accessibility import FlaskAccessibilityAPI

app = Flask(__name__)
tracker = FlaskAccessibilityAPI()

@app.route('/api/status')
def get_status():
    data = tracker.get_real_time_data()
    return jsonify(data)

@app.route('/api/visualizations')
def get_visualizations():
    graphs = tracker.get_tiny_graphs_json()
    return jsonify(graphs)

if __name__ == '__main__':
    app.run(debug=True)
```

This system represents a breakthrough in adaptive accessibility technology, providing real-time insights that can transform how we approach inclusive education and human-computer interaction.
This is a Streamlit application that uses the Gemini API to perform various tasks on a document or a website.
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Set up your API key:
  - Create a file named `.env` in the root of the project.
  - Add your Gemini API key to the `.env` file like this:

    ```
    GEMINI_API_KEY="YOUR_API_KEY_HERE"
    ```

- Run the app:

  ```bash
  streamlit run streamlit_app.py
  ```

This will open the Gemini Playground in your web browser.