logX+: AI-Powered Log Analysis Platform

logX+ is a modern web application designed to simplify log analysis using the power of generative AI. It transforms raw, cryptic log files into clear, actionable intelligence reports, empowering developers and system administrators to diagnose and resolve issues faster than ever before.

The platform features Retrieval-Augmented Generation (RAG), using a history of past log analyses stored locally in your browser to provide richer, more context-aware insights on new incidents. It also includes a simulated on-chain verification mechanism to anchor a hash of the analysis report to the Solana devnet for immutable auditing.


Inspiration

The inspiration behind logX+ comes from a universal pain point for developers and system administrators: deciphering complex, lengthy, and often cryptic log files. When a system fails, time is critical. Sifting through thousands of lines of logs is tedious, error-prone, and requires significant expertise. We were inspired to build a tool that could act as an expert assistant, instantly providing clarity and actionable insights from raw log data, thereby drastically reducing debugging time and stress. The goal was to transform a reactive, manual process into a proactive, intelligent one.

What it does

logX+ is an AI-powered log analysis platform that transforms raw log files into a clear, actionable intelligence report.

  • AI-Powered Analysis: Users paste or upload log files, and the application leverages the Google Gemini API to intelligently parse and understand complex log data.
  • Structured Insights: The output isn't just a summary; it's a structured report including:
    • A concise, plain-English summary of the core problem.
    • A visual "Root Cause Chain" diagram, tracing events from the initial trigger to the final error.
    • An interactive pie chart breaking down errors by category.
    • A list of actionable recommendations, prioritized by criticality.
  • Contextual Awareness (RAG): It learns from past analyses. By maintaining a local history of incidents, it uses relevant past solutions (Retrieval-Augmented Generation) to provide smarter, more context-aware recommendations for new problems.
  • Interactive Guidance: Users can click on any recommendation to open a chat modal and receive step-by-step, AI-driven assistance to resolve the issue.
  • Immutable Auditing (Simulated): It offers a feature to generate a cryptographic hash of the analysis report and simulate anchoring it to the Solana blockchain, creating a verifiable, tamper-proof audit trail for incident reports.
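For illustration, the structured report described above might map to TypeScript types along these lines (hypothetical names; the app's actual types.ts may differ):

```typescript
// Hypothetical shape of the structured analysis report (illustrative only).
interface RootCauseStep {
  event: string;        // e.g. "Connection pool exhausted"
  timestamp?: string;   // as parsed from the log line, if present
}

interface ErrorCategory {
  category: string;     // e.g. "Database", "Network"
  count: number;        // number of log entries in this category
}

interface Recommendation {
  title: string;
  detail: string;
  criticality: "critical" | "high" | "medium" | "low";
}

interface AnalysisReport {
  summary: string;                 // plain-English summary of the core problem
  rootCauseChain: RootCauseStep[]; // initial trigger → intermediate events → final error
  errorBreakdown: ErrorCategory[]; // feeds the pie chart
  recommendations: Recommendation[];
}
```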

How we built it

logX+ is a modern single-page application built with a focus on a clean user experience and powerful AI integration.

  • Frontend: We used React and TypeScript for a robust and type-safe component-based architecture. The UI is styled with Tailwind CSS for rapid, utility-first design.
  • Core AI Logic: The brain of the operation is the Google Gemini API (@google/genai). We engineered specific prompts and defined a strict JSON schema to ensure the model returns consistent, structured data for our UI to consume.
  • RAG Implementation: We simulated a Retrieval-Augmented Generation system using the browser's localStorage. The vectorDBService extracts keywords from new logs, finds similar past incidents in the stored history, and injects that context into the Gemini prompt to enhance its analytical capabilities.
  • Data Visualization: We used Recharts to create the dynamic error breakdown pie chart and built a custom React component to render the root cause analysis chain, providing a clear visual narrative of the issue.
  • Blockchain Simulation: The on-chain verification feature uses the Web Crypto API to perform SHA-256 hashing locally before simulating a transaction to the Solana devnet.

Challenges we ran into

  • Prompt Engineering & Output Structuring: Getting the AI to consistently return valid JSON that matched our complex, nested schema was a significant challenge. It required iterative prompt refinement, clear descriptions for each field in the schema, and setting a low temperature for more deterministic output.
  • Simulating RAG Effectively: Implementing a meaningful RAG system without a backend or a true vector database was tricky. Our localStorage-based keyword-matching approach is a simplification, but tuning the keyword extraction logic to be relevant without being too noisy was a key hurdle.
  • State Management for Streaming Chat: Handling the real-time updates for the streaming chat response required careful state management in React. We had to ensure the UI updated smoothly character by character without causing re-rendering issues or performance degradation.
  • UI/UX for Complex Data: Presenting a dense analysis report in a way that is intuitive and not overwhelming was a major design challenge. We opted for breaking it down into distinct, digestible Card components (Summary, Diagram, Chart, Recommendations) to guide the user's attention.
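To make the RAG simplification concrete, keyword extraction and incident matching of the kind described could be sketched like this (illustrative logic and names; the real vectorDBService tuning differs):

```typescript
// Illustrative sketch of keyword-based matching for the localStorage RAG.
const STOP_WORDS = new Set(["the", "a", "an", "at", "in", "on", "of", "to", "and", "for", "is", "was"]);

// Extract candidate keywords: lowercase tokens, minus stop words and short tokens.
function extractKeywords(log: string): Set<string> {
  return new Set(
    log
      .toLowerCase()
      .split(/[^a-z0-9_.-]+/)
      .filter((t) => t.length > 3 && !STOP_WORDS.has(t))
  );
}

// Jaccard-style overlap between a new log and a stored incident's keywords.
function similarity(a: Set<string>, b: Set<string>): number {
  const shared = [...a].filter((k) => b.has(k)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : shared / union;
}
```

Past incidents scoring above some threshold would then be injected into the Gemini prompt as context.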

Accomplishments that we're proud of

  • End-to-End AI Workflow: We successfully built a complete workflow from raw, unstructured text input to a rich, interactive, and multi-faceted UI powered by structured AI output.
  • The Root Cause Diagram: We are particularly proud of the visual root cause chain. It's a powerful feature that transforms a list of events into an easy-to-follow story, making the root cause immediately apparent.
  • Practical RAG Implementation: Despite its simplicity, our localStorage-based RAG system demonstrably improves the quality of analysis by leveraging historical context, proving the value of the concept even in a client-side application.
  • Seamless User Experience: The ability to go from a raw log file to a step-by-step, interactive chat with an AI expert on a specific recommendation is a smooth and powerful user journey that we worked hard to perfect.
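The streaming chat experience above hinges on folding incoming chunks into state immutably, so React re-renders only when a chunk arrives rather than per character. A minimal sketch of that idea (illustrative names, not the app's actual code):

```typescript
// Hedged sketch: one way to fold streamed chunks into chat state.
interface ChatMessage {
  role: "user" | "model";
  text: string;
}

// Immutably append a streamed chunk to the last model message,
// creating a new model message if the stream has just started.
function applyChunk(messages: ChatMessage[], chunk: string): ChatMessage[] {
  const last = messages[messages.length - 1];
  if (last && last.role === "model") {
    return [...messages.slice(0, -1), { ...last, text: last.text + chunk }];
  }
  return [...messages, { role: "model", text: chunk }];
}
```

In a React component, each chunk would be applied as `setMessages((prev) => applyChunk(prev, chunk))`, letting React batch the updates.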

What we learned

  • The Power of Schemas: For application development, forcing an LLM to respond in a predefined JSON schema via responseSchema is a game-changer. It turns the LLM from a conversationalist into a reliable backend service that produces predictable, machine-readable data.
  • Context is King: The RAG implementation, even in its simulated form, reinforced how critical context is for AI accuracy. Providing the model with relevant examples from the past dramatically improved the quality and relevance of its suggestions.
  • UI/UX for AI: Building UIs for AI applications requires a different mindset. It's not just about displaying data; it's about managing loading states, handling streaming text, and building user trust by clearly presenting how the AI reached its conclusions.
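As a concrete example of the schema-first approach, a response schema for this kind of report might look like the following. This is a hedged sketch, shown as a plain object rather than via the SDK's Type enum, and trimmed down; the app's real schema is assumed to be richer:

```typescript
// Illustrative responseSchema for structured log-analysis output.
// In the app this would be passed to the Gemini API via config.responseSchema.
const analysisSchema = {
  type: "OBJECT",
  properties: {
    summary: {
      type: "STRING",
      description: "Plain-English summary of the core problem.",
    },
    rootCauseChain: {
      type: "ARRAY",
      description: "Events from initial trigger to final error, in order.",
      items: { type: "OBJECT", properties: { event: { type: "STRING" } } },
    },
    recommendations: {
      type: "ARRAY",
      items: {
        type: "OBJECT",
        properties: {
          title: { type: "STRING" },
          criticality: {
            type: "STRING",
            enum: ["critical", "high", "medium", "low"],
          },
        },
      },
    },
  },
  required: ["summary", "rootCauseChain", "recommendations"],
};
```

Clear per-field descriptions like these, combined with a low temperature, are what made the model's JSON output consistent enough to drive the UI directly.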

What's next for logX+

  • Real Vector Database Integration: Replace the localStorage RAG system with a connection to a real vector database like Pinecone or ChromaDB for more powerful, semantic-based retrieval of past incidents.
  • Real-time Log Monitoring: Integrate with logging platforms (like Datadog, Splunk) to analyze logs in real-time, proactively identifying and reporting on issues as they happen.
  • Team Collaboration Features: Add functionality for teams to share, comment on, and archive analysis reports, turning logX+ into a collaborative incident management hub.
  • Automated Fixes: For certain classes of errors, explore providing AI-generated code snippets or CLI commands that users can directly apply to fix the issue, moving from recommendation to remediation.

🚀 Getting Started: Running Locally

Follow these instructions to set up and run the logX+ application on your local machine for development and testing.

Prerequisites

  • You must have Node.js (version 18 or newer) and npm installed.
  • You need a Google Gemini API Key. You can obtain one from Google AI Studio.

Step 1: Install Dependencies

Navigate to the project's root directory in your terminal and run the following command to install all the necessary packages:

npm install

Step 2: Set Up Your Gemini API Key

The application requires your Gemini API key to function.

  1. In the root directory of the project, create a new file named .env.
  2. Open the .env file and add your API key in the following format, replacing YOUR_API_KEY_HERE with your actual secret key:

    VITE_GEMINI_API_KEY="YOUR_API_KEY_HERE"
    

    Note: Vite only exposes environment variables prefixed with VITE_ to client-side code; variables without the prefix are deliberately kept out of the bundle so other secrets on your machine cannot leak into the app.

Step 3: Run the Application

Once the dependencies are installed and your API key is set, you can start the development server:

npm run dev

This will launch the application. Open your web browser and navigate to the local address shown in your terminal (typically http://localhost:5173).


📁 Project Structure

/
├── components/         # Reusable React components
│   ├── analysis/       # Components for displaying analysis results
│   └── ui/             # Generic UI components (Button, Card)
├── services/           # Logic for external APIs
│   ├── geminiService.ts  # Handles all communication with the Gemini AI
│   ├── solanaService.ts  # Simulates Solana hashing and verification
│   └── vectorDBService.ts # Manages the local RAG knowledge base
├── App.tsx             # Main application component and state management
├── index.html          # The main HTML entry point
├── index.tsx           # React application root
└── types.ts            # TypeScript type definitions
