Inspiration

Our inspiration was born from a passion for Formula 1 and a deep frustration with the status quo. We saw a massive chasm between the gigabytes of raw, complex telemetry data that High Performance Computing (HPC) systems produce after every lap and the actionable, split-second decisions a driver needs. In a sport where a thousandth of a second is the difference between winning and losing, it's agonizing to know that critical insights are buried in complex spreadsheets and log files, like our sample_telemetry.json and .csv files, and often discovered by engineers hours after the race. Hastorium was inspired by the need to close that gap: to leverage cutting-edge AI to transform impossibly complex data into immediate, human-readable, strategic recommendations for the driver.

What it does

Hastorium is an autonomous AI-powered race strategist that acts as a co-pilot for an F1 driver. It directly ingests the complex, raw telemetry output from HPC systems, seamlessly supporting both JSON and CSV formats. In seconds, it intelligently analyzes the entire data stream, evaluating multiple critical performance vectors:

  • Speed Profile: Calculates max speed, average speed, and speed variance.
  • Braking Efficiency: Identifies heavy braking events and average brake pressure.
  • Cornering Performance: Analyzes average and minimum cornering speeds.
  • Throttle Application: Measures full-throttle percentage and average throttle position.
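
The four metric groups above could be computed along these lines (a minimal sketch: the function name, channel names, and the 0.8 "heavy braking" and 0.98 "full throttle" cutoffs are illustrative assumptions, not Hastorium's calibrated values):

```python
import numpy as np

def compute_lap_metrics(speed, brake, throttle, corner_speed):
    """Sketch of the four metric groups from per-sample telemetry arrays."""
    speed = np.asarray(speed, dtype=float)
    brake = np.asarray(brake, dtype=float)        # 0.0-1.0 pedal pressure
    throttle = np.asarray(throttle, dtype=float)  # 0.0-1.0 pedal position
    corner_speed = np.asarray(corner_speed, dtype=float)
    return {
        # Speed profile
        "max_speed": float(speed.max()),
        "avg_speed": float(speed.mean()),
        "speed_variance": float(speed.var()),
        # Braking efficiency ("heavy" = pressure above 0.8, an assumed cutoff)
        "heavy_braking_count": int((brake > 0.8).sum()),
        "avg_brake_pressure": float(brake[brake > 0].mean()) if (brake > 0).any() else 0.0,
        # Cornering performance
        "avg_corner_speed": float(corner_speed.mean()),
        "min_corner_speed": float(corner_speed.min()),
        # Throttle application ("full" = at least 0.98, an assumed cutoff)
        "full_throttle_pct": float((throttle >= 0.98).mean() * 100),
        "avg_throttle": float(throttle.mean()),
    }
```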

But it doesn't just present a confusing dashboard of raw numbers. It translates this deep analysis into three simple, powerful outputs:

  1. An objective, overall Performance Score on a 0-100 scale, giving an instant benchmark of the lap's quality.
  2. A prioritized list of actionable recommendations in plain, natural language (e.g., "Increase straight-line speed" or "Optimize braking points").
  3. An optional MP3 audio summary of the decisions, generated via the integrated ElevenLabs Text-to-Speech API, allowing a driver or engineer to listen to the feedback without looking at a screen.

It empowers drivers with the strategic insights they need to optimize their very next lap.
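
One way the 0-100 Performance Score could be assembled from the metrics is a weighted average of per-metric ratios against benchmark targets. This is a hypothetical sketch: the target values and the equal weighting are assumptions, not Hastorium's actual calibration.

```python
def performance_score(metrics, targets=None):
    """Combine per-metric ratios against benchmarks into a 0-100 lap score."""
    # Hypothetical benchmarks for an "ideal" lap; real calibration differs.
    targets = targets or {
        "avg_speed": 250.0,         # km/h
        "full_throttle_pct": 70.0,  # % of lap at full throttle
        "avg_corner_speed": 150.0,  # km/h through corners
    }
    # Each ratio is capped at 1.0 so one strong metric can't mask weak ones.
    ratios = [min(metrics[k] / v, 1.0) for k, v in targets.items()]
    return round(100 * sum(ratios) / len(ratios), 1)
```

For example, a lap with `avg_speed` 234.9, 63% full throttle, and 140 km/h average corner speed scores just over 92 under these assumed targets.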

How we built it

We built Hastorium in a single, intense 24-hour sprint using a highly modular, pipeline-based architecture in Python. This design was crucial: it let us parallelize our workflow and build rapidly. The system is built on four core components:

  1. HPCDataParser: A robust ingestion engine that parses JSON and CSV data, normalizes it, and extracts key feature vectors (speed, brake, throttle, etc.).
  2. TelemetryAnalyzer: The numerical core, using NumPy for high-speed analysis of telemetry features. This component crunches the numbers and calculates all the core metrics.
  3. DecisionEngine: This is the AI brain of the operation. We engineered a hybrid system.
    • First, a high-speed, rule-based heuristic engine (built with scikit-learn utilities) flags immediate, obvious performance deviations based on pre-calibrated thresholds.
    • Then, these structured metrics (e.g., {'avg_speed': 234.9, 'heavy_braking_count': 16}) are fed into the Google Gemini API. This provides a deeper, more nuanced layer of strategic, generative insight, allowing Hastorium to move beyond simple rules to provide human-like, contextual advice on why the speed is low and what specific driving style change will fix it.
  4. HastoriumApp: The central orchestrator that manages the entire pipeline, from data ingestion to numerical analysis to AI-driven summarization, and generates the final user-facing report.
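
The dual-format ingestion in the first component could look roughly like this (a minimal sketch, assuming a list-of-samples JSON layout and a headered CSV; the real HPCDataParser handles more channels and messier inputs):

```python
import csv
import io
import json

FIELDS = ("speed", "brake", "throttle")  # subset of channels, for illustration

def parse_telemetry(raw, fmt):
    """Normalize a JSON or CSV telemetry payload into per-channel float columns.

    `raw` is the file's text content; `fmt` is "json" or "csv".
    """
    if fmt == "json":
        rows = json.loads(raw)  # assumed: a list of per-sample objects
    elif fmt == "csv":
        rows = list(csv.DictReader(io.StringIO(raw)))
    else:
        raise ValueError(f"unsupported format: {fmt}")
    # Columnar output; missing or empty values default to 0.0.
    return {f: [float(r.get(f, 0) or 0) for r in rows] for f in FIELDS}
```

Whatever the input format, the downstream TelemetryAnalyzer then sees one standardized structure.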

We focused on a test-driven approach from the start, creating a comprehensive integration test suite (test_integration.py) to ensure every component was robust and reliable, culminating in a powerful Command-Line Interface (hastorium_cli.py) for immediate, real-world application.

Challenges we ran into

Our greatest challenge was velocity. With only 24 hours, we faced three major hurdles:

  1. Data Normalization: HPC systems output data in varied formats. A significant early challenge was engineering the HPCDataParser to be robust enough to ingest inconsistent formats and transform them into a single, standardized structure for the analysis pipeline.
  2. Prompt Engineering & Integration: Integrating the Gemini API was not just a simple API call. The core challenge was translating our rigid, numerical analysis (a JSON blob like {'avg_brake_pressure': 0.7, 'avg_corner_speed': 140}) into effective natural language prompts that would coax the generative AI into providing genuinely insightful and correct strategic advice, all while ensuring near-real-time responses.
  3. Calibrating the Heuristic Engine: Our baseline heuristic engine relies on hard-coded thresholds. We spent several intense hours debating and fine-tuning these metrics (e.g., is avg_speed < 250 the right trigger? Is speed_variance > 2000 too sensitive?) to ensure our AI's advice was genuinely insightful and not just noise, before we even handed it off to the generative layer for refinement.
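
The two-stage flow behind challenges 2 and 3 can be sketched as follows: a rule table of hard-coded thresholds (the values here echo the ones we debated, but the exact triggers and advice strings are illustrative), and a prompt builder that wraps the metrics and flags into natural language for the generative layer (shown here without the actual Gemini API call):

```python
# Hypothetical thresholds of the kind we fine-tuned; calibrated values differ.
THRESHOLDS = [
    ("avg_speed", lambda v: v < 250, "Increase straight-line speed"),
    ("speed_variance", lambda v: v > 2000, "Smooth out pace inconsistency"),
    ("heavy_braking_count", lambda v: v > 10, "Optimize braking points"),
]

def heuristic_flags(metrics):
    """First pass: cheap rule-based checks flag obvious performance deviations."""
    return [advice for key, trigger, advice in THRESHOLDS
            if key in metrics and trigger(metrics[key])]

def build_prompt(metrics, flags):
    """Second pass: translate the rigid numerical summary into a prompt."""
    lines = "\n".join(f"- {k}: {v}" for k, v in sorted(metrics.items()))
    return (
        "You are an F1 race strategist. Given this lap telemetry summary:\n"
        + lines
        + "\nFlagged issues: " + (", ".join(flags) or "none")
        + "\nExplain why each issue costs lap time and what driving-style "
        "change will fix it. Keep advice short and prioritized."
    )
```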

Accomplishments that we're proud of

We are incredibly proud of building a complete, end-to-end, and production-ready AI system in just 24 hours. This isn't just a concept; it's a fully-functional and deployable tool, as evidenced by:

  • Comprehensive Documentation: We wrote a full README.md, a technical DOCUMENTATION.md, and a CONTRIBUTING.md guide.
  • A Feature-Complete CLI: The hastorium_cli.py is a real tool, supporting output to JSON (--json), decision-only summaries (--decisions-only), and saving full reports (--output results.json).
  • Multi-API Integration: We successfully integrated two different external APIs: Google Gemini for generative strategy and ElevenLabs for text-to-speech output (--tts-output), adding a unique, high-value feature.
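
The flags quoted above imply an argparse surface roughly like this (a sketch: the flag names come from our CLI, but the help text and wiring here are assumptions):

```python
import argparse

def build_cli():
    """Sketch of the hastorium_cli.py argument surface implied by its flags."""
    p = argparse.ArgumentParser(
        prog="hastorium_cli.py",
        description="AI race strategist for HPC telemetry",
    )
    p.add_argument("telemetry", help="path to a .json or .csv telemetry file")
    p.add_argument("--json", action="store_true",
                   help="emit the full report as JSON on stdout")
    p.add_argument("--decisions-only", action="store_true",
                   help="print only the strategic recommendations")
    p.add_argument("--output", metavar="FILE",
                   help="save the full report, e.g. --output results.json")
    p.add_argument("--tts-output", metavar="FILE",
                   help="write an MP3 summary via the ElevenLabs API")
    return p
```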

Our biggest accomplishment is the synergy between our numerical analysis and the generative AI. We built a system that uses a high-speed analysis engine to find the "what" (e.g., "low cornering speed") and then leverages the Gemini API to brilliantly explain the "why" and "how to improve" in a way that is immediately understandable to a non-engineer.

What we learned

  1. We learned that a modular architecture (Parser, Analyzer, Engine) is the key to velocity. It allowed us to build and test the numerical pipeline and the AI integration in parallel.
  2. We gained a profound appreciation for the complexity of F1 telemetry. We learned that "good driving" is a delicate, multi-dimensional balance of conflicting metrics: aggressive throttle, smooth braking, and maintaining high cornering speed.
  3. Our biggest takeaway was the power of Generative AI as a "translator." The raw numerical analysis from our code is useless to a driver. The translation of that analysis into human-readable, prioritized, and motivational advice (the core function of our DecisionEngine's Gemini integration) is the most critical step in creating a truly useful AI tool.

What's next for Hastorium

This is just the beginning. Our 24-hour prototype is the foundation for a much more powerful system. Our own contribution guide outlines the immediate next steps:

  • Implement ML Models: Move beyond heuristics to implement machine learning models for predictive analysis.
  • Create a Web Dashboard: Build a real-time web dashboard for live visualization, piping the AI-generated insights directly to the driver's pit-wall engineers.
  • Expand Data Formats: Add support for additional high-performance data formats like Parquet and HDF5.
  • Historical Analysis: Implement lap comparison features and historical trend analysis to track driver performance over a whole season.

Built With

  • Python
  • Google Gemini API (for generative strategic insights)
  • ElevenLabs API (for text-to-speech audio summaries)
  • NumPy
  • Pandas
  • Scikit-learn
  • Requests (for API integrations)
