Navigable AI

v1.3.1 – Expanded Agent Creation Options

🚀 What’s New

You can now create agents using standard foundational models — no fine-tuning required.

Previously, Navigable AI supported only fine-tuned models (optionally enhanced with RAG grounding). With this release, teams can choose from three flexible training modes when creating an AI assistant:

  1. Fine-Tuned Agents – deeply customized and trained for your product.

  2. RAG Agents – retrieval-augmented for real-time, grounded responses.

  3. Simple (Foundational) Agents – quick setup using base models with custom prompts.

This update gives you the freedom to start fast, scale intelligently, and fine-tune when ready, making it easier to match your AI’s complexity with your use case.

👉 Learn more about the available training modes here:
Training Overview Documentation

💰 Cost Estimator Tool

We’ve added a new Cost Estimator to help you plan your AI strategy effectively.
Quickly compare pricing across training modes — Fine-Tuned, RAG, and Foundational — to choose the right balance between performance and budget.
👉 Try the pricing estimator: https://www.navigable.ai/pricing/

💡 Why It Matters
  • Faster onboarding: Launch assistants in minutes without needing training data.

  • Cost-efficient experimentation: Test conversational flows and prompts before committing to fine-tuning.

  • Scalable architecture: Move seamlessly from foundational to fine-tuned or RAG-enhanced setups as your needs evolve.

🛠 Improvements & Fixes
  • Streamlined agent creation interface with clearer setup steps.

  • Improved backend support for hybrid (Fine-tuned + RAG) configurations.

  • Minor performance optimizations and bug fixes across the dashboard.

✨ Try It Out

Head to your Navigable AI dashboard and create your next assistant with a foundational model — it’s the fastest way to get started.

We’d love your feedback as we continue expanding how teams train and deploy AI on Navigable.

v1.3.0 – Grounded Responses with RAG (Retrieval-Augmented Generation)

This release introduces a game-changing feature that dramatically improves the accuracy and trustworthiness of your assistant’s responses:

✨ Grounded Responses with RAG

You can now enable Retrieval-Augmented Generation (RAG) to ground responses in your project’s Q&A knowledge—boosting accuracy, reducing hallucinations, and making answers more context-aware and verifiable.
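Conceptually, RAG retrieves the most relevant Q&A entries at query time and places them in the model's context so the answer is grounded in known facts. A minimal sketch of the idea (this is not Navigable's actual implementation; real systems use vector embeddings, and the word-overlap scoring here is purely to keep the example self-contained):

```python
# Minimal retrieval-augmented generation sketch.
# Production RAG uses embedding similarity; plain word overlap is used
# here only so the example runs without external dependencies.

def retrieve(query: str, knowledge: list[dict], k: int = 2) -> list[dict]:
    """Return the k Q&A entries whose questions best overlap the query."""
    query_words = set(query.lower().split())

    def score(entry: dict) -> int:
        return len(query_words & set(entry["question"].lower().split()))

    return sorted(knowledge, key=score, reverse=True)[:k]

def build_grounded_prompt(query: str, knowledge: list[dict]) -> str:
    """Prepend retrieved Q&A pairs so the model answers from project knowledge."""
    context = "\n".join(
        f"Q: {e['question']}\nA: {e['answer']}" for e in retrieve(query, knowledge)
    )
    return f"Answer using only the context below.\n\n{context}\n\nUser: {query}"

knowledge = [
    {"question": "How do I reset my password?", "answer": "Use Account → Security."},
    {"question": "What models are supported?", "answer": "GPT-4o Mini and GPT 4.1 Mini."},
]
prompt = build_grounded_prompt("Which models are supported?", knowledge)
print(prompt)
```

Because the model is instructed to answer only from the retrieved context, responses stay verifiable against your knowledge base, which is what reduces hallucinations.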

Why This Matters:

This makes Navigable the only platform where you can combine fine-tuning and RAG to create highly accurate, domain-specific AI agents. Other platforms don’t disclose or deliver accuracy metrics—we do.

This is our most impactful release yet, and we’re incredibly excited to put this level of control and reliability into your hands.

v1.2.3 – Import/Export Knowledge, Link‑Parsing Status, and Key Fixes

✨ New Features

Import & Export Knowledge (CSV)
Upload or download Q&A sets as CSV files for quick review or bulk edits.
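For reference, a Q&A CSV for bulk editing might look like the following round-trip with Python's standard `csv` module (the two-column layout and header names here are illustrative; check an exported file for the exact header Navigable uses):

```python
import csv
import io

# Illustrative two-column Q&A layout.
raw = """question,answer
"How do I reset my password?","Go to Account → Security and click Reset."
"Where can I see billing?","Open the Billing tab in your dashboard."
"""

# Reading an exported file for review:
rows = list(csv.DictReader(io.StringIO(raw)))
print(len(rows), rows[0]["question"])

# Writing bulk edits back out is symmetric:
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["question", "answer"])
writer.writeheader()
writer.writerows(rows)
```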

Link‑Parsing Status Dashboard
The Data Sources → Links tab now displays the total number of links, along with any links still pending parsing.

🐛 Bug Fixes & UX Tweaks

Playground – Accurate Credit Error
Replaced the generic “Unexpected error” with a clear “Organization out of credits” message.

Browser Back Navigation
Page & tab state now persist when using the browser’s Back button.

Training Credits Deduction
Credits now deduct immediately after each training job finishes.

Playground Export Metadata
Fixed an issue where the user’s email address was missing from chat export metadata.

v1.2.2 – Guided Onboarding & Dark Theme

We're excited to introduce enhancements that make Navigable more intuitive and visually appealing:

🧭 Guided Onboarding Panel

  • A new collapsible panel on the right side of the app provides contextual help and a step-by-step timeline to guide users through the entire fine-tuning process.

  • Each step is clickable, directing users to the corresponding page, with "Previous" and "Next" buttons for easy navigation.

🌙 Dark Theme

  • A sleek dark theme is now available by default across the app and website.

  • Users can switch between dark and light themes in Account → Settings → Dark Theme.

🛠️ UI Enhancements
We've revamped multiple screens to ensure consistency across the application:

  • Standardized CRUD operations, search, and pagination flows.

  • This uniformity allows users to learn a single flow and apply it throughout the app, enhancing usability and reducing the learning curve.

These updates aim to simplify the onboarding process for newcomers and provide a more comfortable visual experience for all users. They are part of our ongoing commitment to improving user experience and making Navigable more accessible to everyone.

v1.2.1 – Export Playground Chat Session

We’ve added a highly requested feature to make sharing easier:

🆕 Export Playground Chat Session

  • You can now export any playground chat session as a .txt file.

  • This makes it easy to share chats with teammates or archive them for future reference.

  • Exported files include helpful metadata:

    • Project name

    • Version

    • Model used

    • User/session owner

    • System prompt

    • Full message history

This feature improves collaboration and makes it easier to track and debug fine-tuning workflows.
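As an illustration, a script assembling such an export could look like the sketch below. The field names and layout are assumptions based on the metadata list above, not the exact format Navigable emits:

```python
from datetime import datetime, timezone

def export_chat(path: str, meta: dict, messages: list[dict]) -> None:
    """Write a plain-text chat export: a metadata header mirroring the
    fields listed above, followed by the full message history."""
    lines = [
        f"Project: {meta['project']}",
        f"Version: {meta['version']}",
        f"Model: {meta['model']}",
        f"Owner: {meta['owner']}",
        f"System prompt: {meta['system_prompt']}",
        f"Exported: {datetime.now(timezone.utc).isoformat()}",
        "-" * 40,
    ]
    lines += [f"{m['role']}: {m['content']}" for m in messages]
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")

export_chat(
    "session.txt",
    {"project": "Demo", "version": "3", "model": "gpt-4.1-mini",
     "owner": "dev@example.com", "system_prompt": "Be concise."},
    [{"role": "user", "content": "Hi"},
     {"role": "assistant", "content": "Hello!"}],
)
```

A header like this is what makes exported sessions useful for debugging: anyone reading the file can see exactly which project, version, model, and system prompt produced the transcript.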

v1.2.0 – GPT 4.1 Mini Support + Sort & Filter for Model Evaluations

We’ve added two major planned features to improve your fine-tuning workflow:

1. GPT 4.1 Mini Support

  • Now supporting OpenAI’s GPT 4.1 Mini model for fine-tuning and usage.

  • GPT 4.1 Mini delivers near GPT-4o level performance and is approximately 70% cheaper for fine-tuned models.

  • It provides noticeably better fine-tuned results compared to GPT-4o Mini models.

  • Fine-tuned GPT 4.1 Mini models are achieving over 93% average accuracy, with some exceeding 94%, a major improvement over the ~80% average seen with fine-tuned GPT-4o Mini models.

  • Note: Requests with GPT 4.1 Mini cost about 3x more than requests with GPT-4o Mini.

2. Sort & Filter for Model Evaluations

  • Added sorting and filtering capabilities for evaluation results based on scores.

  • Users can now sort evaluated responses by metric (similarity, relevance, completeness, error handling, and hallucinations), in ascending or descending order.

  • This makes it much easier to identify incorrectly answered questions and prioritize them for re-training, significantly improving the fine-tuning experience.
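The workflow above amounts to ordering evaluation records by a chosen metric and then filtering out the low scorers. A quick sketch of the idea (the record fields and thresholds here are hypothetical, not Navigable's schema):

```python
# Hypothetical evaluation records; sort by any metric, then filter
# low scorers to surface questions worth prioritizing for re-training.
results = [
    {"question": "Q1", "similarity": 0.91, "hallucination": 0.02},
    {"question": "Q2", "similarity": 0.64, "hallucination": 0.21},
    {"question": "Q3", "similarity": 0.88, "hallucination": 0.05},
]

by_similarity = sorted(results, key=lambda r: r["similarity"])  # ascending
needs_retraining = [r for r in by_similarity if r["similarity"] < 0.8]
print([r["question"] for r in needs_retraining])  # → ['Q2']
```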