
NVIDIA-AI-Blueprints/aiq


NVIDIA AI-Q Blueprint

⚠️ IMPORTANT – Active Development Branch

You are currently viewing the develop branch for the pre-release version of AI-Q v2.0.

This branch contains the latest features and experimental updates, and may include breaking changes.

For production use, switch to the v1.2.1 stable release on the main branch.


Overview

The NVIDIA AI-Q Blueprint is an enterprise-grade research agent built on the NVIDIA NeMo Agent Toolkit. It gives you both quick, cited answers and in-depth, report-style research in one system, with benchmarks and evaluation harnesses so you can measure quality and improve over time.

AI-Q Architecture

Key features:

  • Orchestration node — One node classifies intent (meta vs. research), produces meta responses (for example, greetings, capabilities), and sets research depth (shallow vs. deep).
  • Shallow research — Bounded, faster researcher with tool-calling and source citation.
  • Deep research — Long-running multi-step planning and research to generate a long-form citation-backed report.
  • Workflow configuration — YAML configs define agents, tools, LLMs, and routing behavior so you can tune workflows without code changes.
  • Modular workflows — All agents (orchestration node, shallow researcher, deep researcher, clarifier) are composable; each can run standalone or as part of the full pipeline.
  • Evaluation harnesses — Built-in benchmarks (for example, FreshQA, DeepResearch) and evaluation scripts to measure quality and iterate on prompts and agent architecture.
  • Frontend options — Run through the CLI, web UI, or async jobs; see the Getting Started and Ways to Run the Agents sections.
  • Deployment options — Deployment assets for both Docker Compose and Helm.

Software Components

The following software components are used by this project:

Target Audience

This project is for:

  • AI researchers and developers: People building or extending agentic research workflows
  • Enterprise teams: Organizations needing tool-augmented, citation-backed research
  • NeMo Agent Toolkit users: Developers looking to understand advanced multi-agent patterns

Prerequisites

  • Python 3.11–3.13
  • uv package manager
  • NVIDIA API key from NVIDIA AI (for NIM models)
  • Node.js 22+ and npm (optional, for web UI mode)
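Before running setup, you can sanity-check the required tooling with a few version commands (a quick sketch; the uv and Node.js checks degrade gracefully if those tools are not installed yet):

```shell
# Confirm the required Python interpreter is available (3.11-3.13).
python3 --version

# uv is required; Node.js 22+ is only needed for the web UI.
command -v uv >/dev/null 2>&1 && uv --version || echo "uv not found"
command -v node >/dev/null 2>&1 && node --version || echo "node not found (web UI only)"
```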

Optional requirements:

  • Tavily API key (for web search functionality)
  • Serper API key (for academic paper search functionality)

Note: Configure at least one data source (Tavily web search, Serper search tool, or knowledge layer) to enable research functionality.

If these optional API keys are not provided, the agent continues to operate without the corresponding search capabilities. Refer to Obtain API Keys for details.

Hardware Requirements

The following are generalized minimum requirements.

Local Development

  • Typical developer machine for the AI-Q workflow (no GPU required)
  • LlamaIndex (no GPU required)
  • Self- or remote-hosted models

Self Hosted

Remote Hosted

Architecture

AI-Q uses a LangGraph-based state machine with the following key components:

  • Orchestration node: Classifies intent (meta vs. research), produces meta responses when needed, and sets depth (shallow vs. deep) in one step
  • Shallow research agent: Bounded tool-augmented research optimized for speed
  • Deep research agent: Multi-phase research with planning, iteration, and citation management

Each agent can be run individually or as part of the orchestrated workflow. For detailed architecture documentation, refer to Architecture.

Getting Started

Clone the Repository

git clone https://github.com/NVIDIA-AI-Blueprints/aiq.git && cd aiq

Automated Setup

Run the setup script to initialize the environment:

./scripts/setup.sh

This script:

  • Creates a Python virtual environment with uv
  • Installs all Python dependencies (core, frontends, benchmarks, data sources)
  • Installs UI dependencies (if Node.js is available)

Manual Installation

For selective installation, install packages individually:

# Create and activate virtual environment
uv venv --python 3.13 .venv
source .venv/bin/activate

# Install core with development dependencies
uv pip install -e ".[dev]"

# Install frontends (pick what you need)
uv pip install -e ./frontends/cli          # CLI frontend
uv pip install -e ./frontends/debug        # Debug console
uv pip install -e ./frontends/aiq_api      # Unified API (includes debug)

# Install benchmarks (pick what you need)
uv pip install -e ./frontends/benchmarks/deepresearch_bench
uv pip install -e ./frontends/benchmarks/freshqa

# Install data sources (pick what you need)
uv pip install -e ./sources/tavily_web_search
uv pip install -e ./sources/google_scholar_paper_search
uv pip install -e "./sources/knowledge_layer[llamaindex,foundational_rag]"

Obtain API Keys

| API | Environment Variable | Purpose | Required |
| --- | --- | --- | --- |
| NVIDIA API | NVIDIA_API_KEY | LLM inference through NIM | Yes |
| Tavily | TAVILY_API_KEY | Web search | No (if not specified, the agent continues without web search) |
| Serper | SERPER_API_KEY | Academic paper search | No (if not specified, the agent continues without paper search) |

Obtain an NVIDIA API Key

  1. Sign in to NVIDIA Build
  2. Click on any model, then select "Deploy" > "Get API Key" > "Generate Key"

Obtain a Tavily API Key

  1. Sign in to Tavily
  2. Navigate to your dashboard
  3. Generate an API key

Obtain a Serper API Key

  1. Sign in to Serper
  2. Generate an API key from your dashboard

Set Up Environment Variables

Create a .env file in the deploy/ directory:

cp deploy/.env.example deploy/.env

Then edit deploy/.env and fill in your API keys.
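After filling in keys, deploy/.env typically looks like the following sketch (the values shown are placeholders, and your .env.example may define additional variables):

```shell
# deploy/.env -- placeholder values, replace with your real keys
NVIDIA_API_KEY=nvapi-xxxxxxxxxxxxxxxx   # required: LLM inference through NIM
TAVILY_API_KEY=tvly-xxxxxxxxxxxxxxxx    # optional: web search
SERPER_API_KEY=xxxxxxxxxxxxxxxx         # optional: academic paper search
```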

Note: If you do not want to use paper search, follow the steps in the Customization guide to disable it.

Ways to Run the Agents

The frontends/ directory contains different interfaces for interacting with the agents. You can also run agents directly through the NeMo Agent Toolkit CLI.

Command-line interface (CLI)

The CLI provides an interactive research assistant in your terminal:

# Activate the virtual environment
source .venv/bin/activate

# Run with the convenience script
./scripts/start_cli.sh

# Verbose logging
./scripts/start_cli.sh --verbose

# Or run directly with the NeMo Agent Toolkit CLI
nat run --config_file configs/config_cli_default.yml --input "How do I install CUDA?"

The CLI frontend source is in frontends/cli/.

Web UI

For a full web-based experience:

./scripts/start_e2e.sh

This starts:

  • Backend API server at http://localhost:8000
  • Frontend UI at http://localhost:3000

The web UI source is in frontends/ui/. Refer to frontends/ui/README.md for more details.

Web UI with Docker Compose

You can also run the backend and UI with Docker Compose:

cd deploy/compose

# No-auth local setup (LlamaIndex default)
docker compose --env-file ../.env -f docker-compose.yaml up -d --build

# To select a different backend config, set BACKEND_CONFIG in deploy/.env, for example:
# BACKEND_CONFIG=/app/configs/config_web_frag.yml

For more details, refer to:

  • deploy/compose/README.md

Async Deep Research Jobs

Endpoints, SSE streaming, and debug console: refer to frontends/aiq_api/README.md.

Benchmarks

To run agents in evaluation mode, refer to the Evaluating the Workflow section.

Jupyter Notebooks

The docs/notebooks/ directory contains a three-part series that walks through the blueprint from first run to full customization. Run them in order:

| # | Notebook | What it covers | Prerequisites |
| --- | --- | --- | --- |
| 0 | Getting Started with AI-Q | Full blueprint overview — environment setup, orchestrated workflow (intent routing, shallow and deep research), and Docker Compose deployment | NVIDIA_API_KEY; optionally TAVILY_API_KEY, SERPER_API_KEY |
| 1 | Deep Researcher — Web Search | Deep researcher in depth — Python API, nat run, and end-to-end evaluation against the DeepResearch Bench with nat eval | Notebook 0 completed; NVIDIA_API_KEY, TAVILY_API_KEY, SERPER_API_KEY; OpenAI or Gemini key for the judge model |
| 2 | Deep Researcher — Customization | Extending the deep researcher — adding paper search, assigning different LLMs per agent role, editing prompts, and enabling the knowledge layer | Notebooks 0 and 1 completed; NVIDIA_API_KEY, TAVILY_API_KEY, SERPER_API_KEY |

Evaluating the Workflow

The frontends/benchmarks/ directory contains evaluation pipelines for assessing agent performance.

Available Benchmarks

| Benchmark | Description | Location |
| --- | --- | --- |
| Deep Research Bench | RACE and FACT evaluation for research quality | frontends/benchmarks/deepresearch_bench/ |
| FreshQA | Factuality evaluation on time-sensitive questions | frontends/benchmarks/freshqa/ |

Running Evaluations

First, install the benchmark package:

uv pip install -e ./frontends/benchmarks/deepresearch_bench

Download the dataset files:

python frontends/benchmarks/deepresearch_bench/scripts/download_drb_dataset.py

Then run the evaluation with one of the available configurations:

dotenv -f deploy/.env run nat eval --config_file frontends/benchmarks/deepresearch_bench/configs/config_deep_research_bench.yml
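FreshQA follows the same install-then-evaluate pattern. The config filename below is a placeholder; check frontends/benchmarks/freshqa/configs/ for the actual file shipped with the benchmark:

```shell
uv pip install -e ./frontends/benchmarks/freshqa
dotenv -f deploy/.env run nat eval --config_file frontends/benchmarks/freshqa/configs/<config>.yml
```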

For detailed benchmark documentation, refer to:

Development

For development, contribution, and documentation, refer to:

License

This project downloads and installs additional third-party open source software. Review the license terms of these projects, found in LICENSE-THIRD-PARTY, before use.

GOVERNING TERMS: The AI-Q blueprint software and materials are governed by the Apache License, Version 2.0.

About

The AI-Q NVIDIA Blueprint is an open reference example for building intelligent AI agents that connect to your enterprise data, reason using state-of-the-art models, and deliver trusted business insights.
