Inspiration

We have always wanted to build a startup, so naturally we started paying attention to how startups actually get funded. We watched interviews, sat through VC talks, and read up on how accelerators like Y Combinator and Techstars make their decisions. What we kept hearing was the same problem: too many decks, not enough time. Good startups get passed over not because the ideas are bad, but because the review process is slow, manual, and expensive. That bottleneck is what Lito.ai is built to fix.

What It Does

Lito.ai is an AI analyst that digs deep into startup pitch decks in seconds. It reads static PDFs, verifies the data inside them, and turns everything into a clear, structured picture. No prompting, no hand-holding. Just drop in a pitch deck and walk away with the analysis that would have taken a VC team hours to put together.

Multimodal Vision Parsing: Powered by Amazon Nova Lite, Lito.ai doesn't just scan text on a slide. It actually looks at each page the way a human would. That means it can tell the difference between a real team member's name and a dummy placeholder sitting inside a UI mockup or mobile screenshot, something a standard text parser would get wrong every time.

Autonomous Research Loop: Once the deck is read, Lito.ai doesn't stop there. It triggers Amazon Nova agents to go out to the live web using built-in grounding tools and check everything: founder backgrounds, revenue claims, market size, and the competitive landscape, all in real time, without anyone telling it where to look.

Interactive Due Diligence: Most tools hand you a report and leave you with it. Lito.ai works differently. Through a Live Analyst Chat, VCs can push back on the data directly. Ask it to dig deeper into a specific risk, question a claim, or re-verify a number, and the agents go back out, run new searches, and update the dashboard on the spot.

The Trust-First Index (TFI): Every startup analyzed gets a TFI score. It's a proprietary scoring system that benchmarks each startup against real-world VC metrics and automatically applies penalties for anything that couldn't be verified or was disputed during research. Instead of reading through everything yourself, analysts know exactly where to focus and which opportunities are worth their time.

Benchmarks and Scoring Research

Our research confirmed that accelerators value customer validation and accurate competitor identification most. All Lito benchmarks are verified against data from industry leaders like Carta, First Round Capital, and PitchBook, with 2026 Q1 figures refreshed quarterly for full transparency.

The Performance Index measures a startup's standing against 2026 Q1 industry averages. Our agentic logic uses a weighted composite audit to place startups on a bell curve, identifying each as either a "Hidden Gem" or market-average. Lito prioritizes Veracity Alpha and Market Defensibility. By valuing "Deep Value over Surface Polish," we surface "Technical Geniuses" who may have weak pitch decks but high code velocity. This "Outlier Logic" lets Lito separate the Performance Score from the Risk Profile, flagging a startup that may look risky but possesses top-1% technical veracity.


How We Built It

The Lito.ai platform is a multi-agent orchestration pipeline deployed entirely on AWS, built so that each agent has one clear job and hands off cleanly to the next.

Orchestration: AWS Step Functions manages the entire pipeline, coordinating the sequence of six specialized agents, handling errors between steps, and making sure data persists correctly throughout the analysis.
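As a rough sketch of how such a pipeline gets kicked off: Step Functions executions are started with a JSON payload that every downstream agent reads and extends. The ARN, payload fields, and function names below are illustrative assumptions, not Lito.ai's actual resources.

```python
import json

# Hypothetical state-machine ARN -- illustrative, not the real Lito.ai resource.
STATE_MACHINE_ARN = (
    "arn:aws:states:us-east-1:123456789012:stateMachine:lito-pipeline"
)

def build_pipeline_input(deck_s3_key: str, analysis_id: str) -> str:
    """Serialize the shared payload each agent in the sequence will receive."""
    return json.dumps({
        "analysisId": analysis_id,
        "deckKey": deck_s3_key,
        "agentsCompleted": [],  # appended to as each agent finishes
    })

def start_analysis(deck_s3_key: str, analysis_id: str):
    """Kick off one run of the 6-agent pipeline (requires AWS credentials)."""
    import boto3
    sfn = boto3.client("stepfunctions")
    return sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        name=analysis_id,  # execution names must be unique per state machine
        input=build_pipeline_input(deck_s3_key, analysis_id),
    )
```

Keeping all run state in one payload is what lets Step Functions retry a single failed agent without restarting the whole analysis.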

The Brains: We used the Amazon Nova model family through the Bedrock Converse API. Nova Pro handles high-level reasoning and complex analysis, while Nova Lite handles the fast, image-heavy work of reading slides visually and running web grounding.

The Agents: Each analyst in the pipeline is an independent AWS Lambda function with a specific role and its own set of instructions. This keeps the system modular, meaning any single agent can be updated or improved without touching the rest of the pipeline.
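The one-agent-per-Lambda pattern can be sketched as below; this assumes Python handlers and an event shape of our own invention, since the actual Lito.ai schema isn't published.

```python
# Minimal sketch of one agent Lambda. Step Functions passes the shared
# analysis payload in as the event; the field names ("claims",
# "agentsCompleted") are illustrative, not Lito.ai's real schema.

def handler(event: dict, context=None) -> dict:
    """Claim-filtering agent: do its one job, then hand off to the next state."""
    claims = event.get("claims", [])
    # ... agent-specific work would happen here (Bedrock calls, etc.) ...
    event["claims"] = claims
    event.setdefault("agentsCompleted", []).append("claim_filter")
    return event  # Step Functions forwards this dict to the next agent
```

Because each handler only reads and writes this shared payload, swapping out one agent's logic never touches the others.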

Data and State: The frontend is built on Next.js and syncs in real time with Amazon DynamoDB and S3. Users can watch the agents work as it happens, seeing the analysis build out live rather than waiting for a final result to appear.
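One way the live progress view could be fed, assuming a DynamoDB status table the frontend polls or streams; the table and attribute names are hypothetical.

```python
import time

def build_status_item(analysis_id: str, agent: str, status: str) -> dict:
    """DynamoDB item the frontend reads to render live agent progress.
    Table and attribute names are illustrative, not Lito.ai's schema."""
    return {
        "analysisId": {"S": analysis_id},
        "agent": {"S": agent},
        "status": {"S": status},  # e.g. "running" | "done" | "error"
        "updatedAt": {"N": str(int(time.time()))},
    }

def publish_status(analysis_id: str, agent: str, status: str):
    """Write one status update (requires AWS credentials)."""
    import boto3
    boto3.client("dynamodb").put_item(
        TableName="lito-analysis-status",  # hypothetical table name
        Item=build_status_item(analysis_id, agent, status),
    )
```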

Research and Problem Space

The Problem: Accelerators suffer from "Document Fatigue" and "Heuristic Bias," often overlooking "Hidden Gems" due to manual review limits.

User Research Insight: Interviews revealed that accelerators prioritize subjective factors like founder personality, noting: "It's easier to change ideas and mentor the founder than to find a new founder." Consequently, we integrated this into our scoring rubric. When an accelerator reviews a memo, Lito identifies missing data or red flags and generates follow-up questions for the founder. It also drafts acceptance or rejection emails, including specific decision logic, which improves applicant transparency, outcomes, and legal defensibility.

The Gap: Current AI tools act as "Black Boxes." Users fear legal liability from automated rejections and a lack of data provenance.

Problems Identified:

  • Isolated apps performing fragmented tasks, causing app fatigue.
  • Difficulty providing constructive feedback to every interviewee.
  • Legal liability concerns from investors.
  • Data privacy and security risks.
  • Issues arising from subjective, inconsistent scoring.
  • "Black Box" AI that lacks the transparency required for due diligence.
  • General AI lacking specific industry knowledge.

Key Solutions:
  • Increased accuracy in discovering “Hidden Gems”—identifying high-potential startups missed by standard “checkbox” scanning.
  • Automated follow-up questions and drafted rejection/acceptance emails with Specific Reason-Coding to reduce liability and increase transparency.
  • A permanent "Paper Trail" for due diligence to protect against future audits or lawsuits.
  • Standardized, streamlined application processes that increase efficiency for both teams and applicants.
  • Transparent selection criteria to reduce subjectivity, maintain evaluation consistency, and enable scalable automation.
  • An integrated platform that acts as the "Operating System" for accelerator workflows, enabling seamless data sharing and management from application to exit.
  • Human-in-the-Loop Overrides: A “Review & Override” button for every AI verdict, ensuring humans maintain final authority.
  • Prioritized “Speed-to-Lead”: Capturing the high-volume metric accelerators care most about (e.g., a "2h 14m" response time) to turn AI hype into tangible ROI.
  • Zero-Onboarding Architecture: Unlike competitors who require specialized training or long onboarding cycles, Lito is designed for immediate "Plug-and-Play" utility. By utilizing Contextual Tooltips and an intuitive dashboard, we've created a completely independent user experience that allows an analyst to perform a forensic audit on day one without a manual.

The Architecture

The Lito.ai architecture is defined by a 6-Agent System Pipeline that ensures every claim is cross-referenced and verified before it reaches the human analyst.

lito-ai architecture diagram

Agent 1, Extraction (The Eyes): Nova Lite renders each PDF page as an image and extracts the content visually. It flags UI mockups and screenshots so their contents don't get mistaken for real company data, preventing hallucinations from the start.
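The Bedrock Converse API accepts raw image bytes alongside text, so each rendered page can go to Nova Lite in a single message. A minimal sketch follows; the model ID, prompt wording, and helper names are illustrative, not Lito.ai's actual code.

```python
def build_page_message(page_png: bytes) -> dict:
    """One Converse-API user message: the rendered slide plus an instruction."""
    return {
        "role": "user",
        "content": [
            {"image": {"format": "png", "source": {"bytes": page_png}}},
            {"text": "Extract the content of this slide. Flag any text that "
                     "appears inside a UI mockup or screenshot rather than "
                     "on the slide itself."},
        ],
    }

def extract_page(page_png: bytes) -> str:
    """Send one rendered page to Nova Lite (requires AWS credentials;
    the model ID may need to be a regional inference profile)."""
    import boto3
    bedrock = boto3.client("bedrock-runtime")
    resp = bedrock.converse(
        modelId="amazon.nova-lite-v1:0",
        messages=[build_page_message(page_png)],
    )
    return resp["output"]["message"]["content"][0]["text"]
```

Asking the model to label mockup text in the same pass is what keeps placeholder names out of the downstream claims.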

Agent 2, Claim Filtering (The Strategist): Takes the raw extracted data and organizes it into specific, verifiable VC claims (revenue figures, growth metrics, cap table terms) so the next agents know exactly what to go and check.
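A toy version of that filtering step might look like this; the claim schema and category names are assumptions for illustration, since the Strategist's real output format isn't published.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """Illustrative claim record; Lito.ai's actual schema is an assumption here."""
    category: str               # e.g. "revenue", "growth", "cap_table", "team"
    statement: str              # verbatim claim pulled from the deck
    slide: int                  # where it appeared, for the audit trail
    status: str = "unverified"  # later set to "verified" or "disputed"
    sources: list = field(default_factory=list)

def filter_claims(raw_items: list) -> list:
    """Keep only items concrete enough for the fact-check agents to test."""
    verifiable = {"revenue", "growth", "cap_table", "team", "market_size"}
    return [
        Claim(category=i["category"], statement=i["statement"], slide=i["slide"])
        for i in raw_items
        if i.get("category") in verifiable and i.get("statement")
    ]
```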

Agents 0 & 3, Grounding and Fact-Check (The Detectives): These two agents use Amazon Nova Web Grounding to research the company on the live web and cross-reference every claim against what's actually out there. They verify the founders, the numbers, and the competitive context before anything gets scored.
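Roughly, grounded verification is a Converse call with Nova's web-grounding tool enabled. The exact tool wiring below, in particular the `systemTool` name, is an assumption based on the Converse `toolConfig` shape and should be checked against the current Bedrock documentation.

```python
def build_grounded_request(claim_text: str) -> dict:
    """Converse-API request asking Nova to verify one claim on the live web.
    The systemTool name is an assumption -- confirm it in the Bedrock docs."""
    return {
        "modelId": "us.amazon.nova-lite-v1:0",
        "messages": [{
            "role": "user",
            "content": [{"text": "Verify this startup claim against current "
                                 "web sources and cite them: " + claim_text}],
        }],
        "toolConfig": {"tools": [{"systemTool": {"name": "nova_grounding"}}]},
    }

def fact_check(claim_text: str) -> dict:
    """Run one grounded verification call (requires AWS credentials)."""
    import boto3
    bedrock = boto3.client("bedrock-runtime")
    return bedrock.converse(**build_grounded_request(claim_text))
```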

Agent 4, TFI Scoring (The Judge): Takes the verified findings and scores the startup against real VC benchmarks. Any claim that couldn't be verified or was disputed gets a scoring penalty, giving the analyst a clear signal of where the risks are.
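In spirit, the penalty logic reduces to subtracting a cost per unverified or disputed claim from the benchmark composite. The weights below are invented for illustration; the real TFI rubric is proprietary.

```python
# Invented penalty weights -- the actual TFI rubric is proprietary.
PENALTIES = {"verified": 0.0, "unverified": 8.0, "disputed": 20.0}

def tfi_score(benchmark_score: float, claim_statuses: list) -> float:
    """Start from the benchmark composite, subtract a penalty per bad claim,
    and floor the result at zero."""
    penalty = sum(PENALTIES.get(s, 0.0) for s in claim_statuses)
    return max(0.0, benchmark_score - penalty)
```

So a deck benchmarking at 80 with one unverified and one disputed claim would land at 52, giving the analyst an immediate signal of where the risk sits.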

Agent 5, Memo Generation (The Writer): Compiles everything into a clean, structured investment memo ready for the analyst to review, complete with sourced findings and the TFI score.

Step Functions

Human-in-the-Loop Chat: After the pipeline completes, the analyst isn't locked out of the process. The Live Analyst Chat lets users interact with the data directly, re-triggering specific agents to refine findings, run additional searches, and update the system state in real time.

Design of Product Features

To eliminate the "black box AI" concerns a 6-agent pipeline invites, we designed a Global Progress Bar and a scrollable live feed that show exactly which agent is processing each document. This visual feedback stops users from refreshing the page because they think it's frozen, which saves on API costs.

The dashboard visualizes our core user research, answering the most critical accelerator questions: which startups require immediate action, which are projected for success based on historical data, and how the current portfolio compares to industry benchmarks. Table-based workflow controls allow quick filtering by top scores, sector leaders, or "Hidden Gems."

Agents report data across six scores: Data Veracity (Truth), AI Confidence (Certainty), Data Integrity (Evidence), Benchmark Score (Overall Performance), Market Defensibility (Moat), and Risk Profile (Vulnerabilities).


Challenges We Ran Into

Getting the Analysis Accurate and Consistent: The hardest problem we faced was getting Lito.ai to produce reports that were both accurate and reliable across different pitch decks. Amazon Nova would sometimes hallucinate chart values, misread visual data, or produce inconsistent outputs depending on how a deck was structured. Getting the pipeline to a point where the extracted claims, verdicts, and TFI scores could actually be trusted took multiple rounds of prompt engineering, testing, and iteration across every agent in the pipeline.

Working Around Content Filters: Amazon Bedrock's content filters were occasionally blocking legitimate founder and company information during analysis. The fix wasn't just prompt adjustments. We had to build a dedicated agent whose sole job was to research the company separately using just the company name, working around the filters without compromising the quality of the analysis.

Web Search Integration: You.com was blocking our requests because it detected bot-like behavior: multiple rapid API calls coming from AWS Lambda functions during testing. Rather than trying to work around it, we switched to Amazon Nova's built-in web grounding through the latest version of boto3, which solved the problem and kept everything natively within the AWS ecosystem.

Accomplishments We're Proud Of

Building a Full Pipeline That Actually Works: Getting six independent agents to run in sequence, each handing off cleanly to the next, and producing a result accurate enough to trust was not a given. Seeing it work end to end for the first time was a real moment for us.

Using AWS to Solve a Real Problem: Every service we used, from Bedrock to Step Functions to Lambda, was chosen because it solved a specific problem in the pipeline, not just because it was available. Being able to navigate the AWS ecosystem and make it all work together in a weekend was something we are genuinely proud of.

What Our Team Built Together: Leon handled the entire technical build while Anna drove the research and design. Two people, two very different skill sets, one coherent product. That does not happen without good communication and a lot of trust in each other.

What We Learned

Multi-Agent Orchestration Is a Different Kind of Problem: Building with AI agents isn't just about writing good prompts. Each agent has its own failure points, and when they run in sequence, a problem in one cascades into the next. We learned that building a reliable multi-agent pipeline is less about the architecture and more about how many times you're willing to break it and rebuild it.

Stay Close to the Ecosystem You're Building In: We ran into three separate situations where a third-party tool broke or required a workaround mid-build. Every time, the solution ended up being to lean further into AWS natively rather than pulling in something external. The more we stayed within the ecosystem, the more stable and predictable the build became.

What a Strong Researcher and Designer Actually Changes: This was the first time I (Leon) had worked with a dedicated designer and researcher, and it made a real difference. Having Anna focused on the problem space and the user experience while the engineering ran in parallel kept both sides from being neglected. Navigating communication between the two roles was its own challenge, but we figured it out as we went. It made the whole build more focused and, honestly, a lot more fun.

What's Next

Lito aims to become the "Brain to the Body" of accelerator workflows through seamless stack integration.

  • CRM/Workflow Integration: Reminders for document requests and bi-directional sync with platforms like Notion, so every score, verdict, and red flag flows directly into the accelerator's existing pipeline without any manual entry.

  • Scalability: Expanding multi-agent verification beyond Seed stage to Series A, B, and C, keeping all data in a single source of truth.

  • Social Integration: Slackbot functionality allowing teams to @Lito for instant metric retrieval during internal discussions, with real-time alerts delivered where the team already works. Whether a high-scoring deck just came in, a red flag was detected, or a founder resubmitted, the team knows instantly without having to open the dashboard.

  • Security & Compliance: Building toward SOC 2 certification, ensuring data never leaves the organization and confirming that customer data is never used for model training.

  • Interactive Reporting: PDF exports will be watermarked, immutable, and timestamped for audit trails.

  • Real-time Data: Implementing WebSockets for live updates and clickable score drawers that reveal exact source references.

  • Adoption: Implementing a "Free Trial" model (first 5 uploads free) and providing comprehensive tooltips for an independent, "no-onboarding" user experience.

  • Auditability: Developing more detailed audit logs for every agent’s specific logic and source findings.

Lito.ai's core pipeline is built. Further iterations aim to add several new features:

Model Training: As Lito.ai processes more pitch decks over time, it builds a dataset no one else has: thousands of screened startups with verified claims, TFI scores, and real investment outcomes. The long-term goal is to train a purpose-built model on that data, one that gets sharper at spotting red flags, predicting fundability, and identifying the early signals that separate breakout companies from the rest of the batch.

Complete Design: The current interface gets the job done, but we want to bring the full vision to life. A polished, production-ready dashboard that makes navigating a batch of 500 decks feel effortless.

Automated Email: Once a verdict is reached, Lito.ai will draft and send personalized feedback emails to founders, whether that's a rejection with specific notes pulled from the analysis or a next-steps request for promising applications.

Legal Audit Agent: The next major addition to the pipeline. A dedicated agent that goes beyond web verification and cross-references what a founder claims in their pitch deck against their actual legal documents, cap tables, and contracts, catching discrepancies before anything reaches an investment committee.

Built With

Amazon Bedrock (Converse API) · Amazon Nova Pro & Lite · AWS Step Functions · AWS Lambda · Amazon DynamoDB · Amazon S3 · Next.js · boto3
