gstavya/lingolearn-ai

LingoLearn AI

An intelligent language learning application that helps users practice speaking in different languages through AI-powered conversations. The app provides real-time speech recognition, translation, and personalized feedback to improve language fluency.

✨ Features

🎤 Speech Recognition & Practice

  • Real-time speech recognition in multiple languages (Spanish, French, Hindi, English)
  • Voice-to-text conversion via the Google Speech Recognition API
  • Interactive conversations with AI native speakers
  • Text-to-speech responses in the target language
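As a sketch of how the app might route a user-selected language to the locale codes Google's recognizer expects (e.g. when calling `recognize_google(audio, language=...)`) — the specific locale choices and helper name below are assumptions for illustration, not taken from the repository:

```python
# Hypothetical mapping from the UI language choice to the BCP-47 locale
# codes accepted by Google's speech recognizer. The exact locales the
# app uses may differ; these are illustrative defaults.
SPEECH_LOCALES = {
    "Spanish": "es-ES",
    "French": "fr-FR",
    "Hindi": "hi-IN",
    "English": "en-US",
}

def locale_for(language: str) -> str:
    """Return the recognizer locale for a supported language."""
    try:
        return SPEECH_LOCALES[language]
    except KeyError:
        raise ValueError(f"Unsupported language: {language}")
```

The returned locale would then be passed as the `language` argument when transcribing captured audio.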

🤖 AI-Powered Learning

  • OpenAI GPT-3.5 Turbo integration for natural conversations
  • Contextual responses that guide conversations on specific topics
  • Automatic translation to English for better understanding
  • Personalized feedback with fluency scoring (1-10 scale)
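A minimal sketch of how the conversation prompt for GPT-3.5 Turbo might be assembled — the system-prompt wording, helper name, and history format are illustrative assumptions, not the repository's actual code:

```python
def build_messages(language, topic, history):
    """Assemble a Chat Completions message list: a system prompt that
    frames the model as a native speaker guiding a specific topic,
    followed by the running conversation history as (user, assistant)
    pairs."""
    system = (
        f"You are a friendly native {language} speaker. "
        f"Keep replies brief, stay on the topic of {topic}, "
        f"and gently guide the learner with follow-up questions."
    )
    messages = [{"role": "system", "content": system}]
    for user_text, ai_text in history:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": ai_text})
    return messages
```

The resulting list matches the shape expected by the `gpt-3.5-turbo` chat endpoint, so each turn can simply append to the history and re-send.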

📊 Progress Tracking

  • Session statistics including time spent and number of practice sessions
  • Performance analytics with average fluency scores
  • Detailed feedback history with specific improvement suggestions
  • Firebase integration for persistent data storage
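For illustration, the dashboard statistics above could be aggregated from stored session records along these lines (the field names are assumptions; the repository's actual Firestore schema may differ):

```python
def summarize_sessions(sessions):
    """Compute dashboard statistics from a list of session records,
    each a dict with 'duration_min' (minutes practiced) and
    'fluency' (1-10 score) fields."""
    if not sessions:
        return {"sessions": 0, "minutes": 0, "avg_fluency": None}
    total_minutes = sum(s["duration_min"] for s in sessions)
    avg = sum(s["fluency"] for s in sessions) / len(sessions)
    return {
        "sessions": len(sessions),
        "minutes": total_minutes,
        "avg_fluency": round(avg, 1),
    }
```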

🔐 User Authentication

  • Firebase Authentication with email/password and Google sign-in
  • Secure session management
  • User dashboard with personalized experience

🚀 Getting Started

Prerequisites

  • Python 3.8 or higher
  • Microphone for speech input
  • Speakers for audio output
  • OpenAI API key
  • Firebase project with authentication and Firestore enabled

Installation

  1. Clone the repository

    git clone <repository-url>
    cd LingoLearn-AI
  2. Create and activate virtual environment

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install dependencies

    pip install -r requirements.txt
  4. Set up environment variables. Create a .env file in the root directory:

    OPENAI_API_KEY=your_openai_api_key_here
  5. Configure Firebase

    • Download your Firebase service account key as key.json
    • Place it in the root directory
    • Enable Authentication and Firestore in your Firebase console
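Steps 4 and 5 can be sanity-checked at startup; for example, failing fast when the OpenAI key is missing rather than hitting a cryptic API error later. This is a stdlib sketch and an assumption about the app's startup flow — the project may instead load the .env file with a helper such as python-dotenv:

```python
import os

def require_openai_key() -> str:
    """Read OPENAI_API_KEY from the environment, raising a clear
    error if it is unset."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set - add it to your .env file "
            "and load it before starting the app."
        )
    return key
```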

Running the Application

Option 1: Flask Web App

python app.py

Navigate to http://localhost:5000 in your browser.

Option 2: Streamlit Interface

streamlit run app2.py

The Streamlit interface will open automatically in your browser.

🎯 How to Use

Web Interface (Flask)

  1. Login with your email or Google account
  2. Select your target language (Spanish, French, Hindi)
  3. Click "Speak" to start voice recognition
  4. Have a conversation with the AI native speaker
  5. End the session to receive personalized feedback
  6. View your progress in the dashboard

Streamlit Interface

  1. Launch the app and select "Chat" from the sidebar
  2. Set your language by clicking "Detect Language" or selecting it manually
  3. Click "Speak" to begin voice recognition
  4. Practice conversation with the AI assistant
  5. End conversation to get detailed feedback
  6. Check statistics in the Statistics page

🛠️ Technology Stack

  • Backend: Python, Flask, Streamlit
  • AI/ML: OpenAI GPT-3.5 Turbo, LangChain
  • Speech Processing: SpeechRecognition, Google Speech API
  • Database: Firebase Firestore
  • Authentication: Firebase Auth
  • Frontend: HTML, CSS, JavaScript, Bootstrap

πŸ“ Project Structure

LingoLearn AI/
├── app.py                 # Flask web application
├── app2.py               # Streamlit interface
├── requirements.txt      # Python dependencies
├── key.json             # Firebase service account key
├── .env                 # Environment variables
├── static/              # Static assets
│   ├── css/
│   └── js/
├── templates/           # HTML templates
│   ├── base.html
│   ├── index.html
│   └── dashboard.html
└── venv/               # Virtual environment

🔧 Configuration

OpenAI API Setup

  1. Sign up at OpenAI
  2. Generate an API key
  3. Add it to your .env file

Firebase Setup

  1. Create a Firebase project at Firebase Console
  2. Enable Authentication and Firestore
  3. Download service account key as key.json
  4. Configure authentication methods (Email/Password, Google)

Speech Recognition

The app uses Google's Speech Recognition API, which requires an internet connection. Supported languages:

  • Spanish (es)
  • French (fr)
  • Hindi (hi)
  • English (en)

📊 Features in Detail

AI Conversation System

  • Contextual responses: AI maintains conversation context
  • Topic guidance: AI introduces specific topics and questions
  • Brief responses: Keeps conversations natural and engaging
  • Language-specific voices: Different TTS voices for each language

Feedback System

  • Fluency scoring: 1-10 rating based on conversation quality
  • Specific improvements: three targeted suggestions, each citing examples from the conversation
  • Progress tracking: Historical data and performance trends
  • Session analytics: Time spent, session count, average scores
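Since the model returns its feedback as free text, the app presumably extracts the numeric score before storing it. A hedged sketch of one way to do that — the helper name, regex, and expected score format (e.g. "7/10") are assumptions, not taken from the repository:

```python
import re

def extract_fluency_score(feedback: str):
    """Pull a 1-10 fluency rating out of feedback text such as
    'Fluency: 7/10'. Returns None if no score is found."""
    match = re.search(r"\b(10|[1-9])\s*/\s*10\b", feedback)
    return int(match.group(1)) if match else None
```

Parsing defensively like this matters because LLM output formatting can drift between calls; returning `None` lets the caller skip storing a malformed score instead of crashing the session.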

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

πŸ“ License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • OpenAI for providing the GPT-3.5 Turbo API
  • Google for Speech Recognition and Text-to-Speech services
  • Firebase for authentication and database services
  • LangChain for AI/ML framework
  • Streamlit for the interactive web interface

📞 Support

If you encounter any issues or have questions:

  1. Check the Issues page
  2. Create a new issue with detailed description
  3. Include error messages and system information

Happy Learning! 🌍🗣️
