Intel will not provide or guarantee development of or support for this project, including, but not limited to, maintenance, bug fixes, new releases, or updates.
Patches to this project are no longer accepted by Intel.
If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the community, please create your own fork of the project.
📍 Installation • 🚀 Components • 📚 Examples • 🚗 Getting Started • 💊 Demos • ✏️ Scripts • 📊 Benchmarks
fastRAG is a research framework for efficient and optimized retrieval augmented generative pipelines, incorporating state-of-the-art LLMs and Information Retrieval. fastRAG is designed to empower researchers and developers with a comprehensive toolset for advancing retrieval augmented generation.
Comments, suggestions, issues and pull requests are welcome! ❤️
> [!IMPORTANT]
> Now compatible with Haystack v2+. Please report any issues you find.
- 2024-05: fastRAG V3 is Haystack 2.0 compatible 🔥
- 2023-12: Gaudi2 and ONNX runtime support; Optimized Embedding models; Multi-modality and Chat demos; REPLUG text generation.
- 2023-06: ColBERT index modification: adding/removing documents; see IndexUpdater.
- 2023-05: RAG with LLM and dynamic prompt synthesis example.
- 2023-04: Qdrant `DocumentStore` support.
- Optimized RAG: Build RAG pipelines with SOTA efficient components for greater compute efficiency.
- Optimized for Intel Hardware: Leverage Intel extensions for PyTorch (IPEX), 🤗 Optimum Intel and 🤗 Optimum-Habana to run as optimally as possible on Intel® Xeon® processors and Intel® Gaudi® AI accelerators.
- Customizable: fastRAG is built using Haystack and HuggingFace. All of fastRAG's components are 100% Haystack compatible, so they drop into standard Haystack pipelines; see the sketch below.
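Because fastRAG components follow the Haystack 2.x component API, they slot into ordinary Haystack pipelines. The following minimal sketch uses only plain Haystack 2.x building blocks (no fastRAG-specific classes) to show the pipeline shape that fastRAG components plug into:

```python
from haystack import Document, Pipeline
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

# Index a couple of toy documents in an in-memory store.
store = InMemoryDocumentStore()
store.write_documents([
    Document(content="fastRAG builds optimized RAG pipelines."),
    Document(content="Haystack 2.x pipelines are graphs of components."),
])

# Any fastRAG retriever, ranker, or generator can replace or extend
# a component in this graph.
pipe = Pipeline()
pipe.add_component("retriever", InMemoryBM25Retriever(document_store=store))

result = pipe.run({"retriever": {"query": "What does fastRAG do?"}})
print(result["retriever"]["documents"][0].content)
```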
For a brief overview of the various unique components in fastRAG, refer to the Components Overview page.
| Component | Description |
|:---|:---|
| **LLM Backends** | |
| Intel Gaudi Accelerators | Running LLMs on Gaudi 2 |
| ONNX Runtime | Running LLMs with optimized ONNX Runtime |
| OpenVINO | Running quantized LLMs using OpenVINO |
| Llama-CPP | Running RAG pipelines with LLMs on a Llama-CPP backend |
| **Optimized Components** | |
| Embedders | Optimized int8 bi-encoders |
| Rankers | Optimized/sparse cross-encoders |
| **RAG-efficient Components** | |
| ColBERT | Token-based late interaction |
| Fusion-in-Decoder (FiD) | Generative multi-document encoder-decoder |
| REPLUG | Improved multi-document decoder |
| PLAID | Incredibly efficient indexing engine |
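To make the table concrete, here is a sketch of how an optimized ranker could re-rank retrieved documents inside a pipeline. The `fastrag.rankers` import path and the `BiEncoderSimilarityRanker` class name are assumptions for illustration only; consult the Components Overview page for the actual names. The wiring itself is standard Haystack 2.x:

```python
from haystack import Document, Pipeline
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

# ASSUMPTION: hypothetical import; see the Components Overview page for
# the real module path and class name of the optimized ranker.
from fastrag.rankers import BiEncoderSimilarityRanker

store = InMemoryDocumentStore()
store.write_documents([Document(content="PLAID is an efficient late-interaction indexing engine.")])

pipe = Pipeline()
pipe.add_component("retriever", InMemoryBM25Retriever(document_store=store, top_k=20))
pipe.add_component("ranker", BiEncoderSimilarityRanker())  # hypothetical default arguments
pipe.connect("retriever.documents", "ranker.documents")

# Retriever and ranker both receive the query at run time.
result = pipe.run({
    "retriever": {"query": "efficient indexing"},
    "ranker": {"query": "efficient indexing"},
})
```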
Preliminary requirements:
- Python 3.8 or higher.
- PyTorch 2.0 or higher.
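Before installing, you can sanity-check both requirements with a quick snippet (illustrative only, not part of fastRAG):

```python
import sys

import torch

# fastRAG requires Python >= 3.8 and PyTorch >= 2.0.
assert sys.version_info >= (3, 8), "Python 3.8 or higher is required"
assert int(torch.__version__.split(".")[0]) >= 2, "PyTorch 2.0 or higher is required"
print(f"Python {sys.version.split()[0]}, PyTorch {torch.__version__}")
```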
To set up the software, install from pip or clone the project for the bleeding-edge updates. Run the following, preferably in a newly created virtual environment:
```bash
pip install fastrag
```

There are additional dependencies that you can install based on your specific usage of fastRAG:

```bash
# Additional engines/components
pip install fastrag[intel] # Intel optimized backend [Optimum-intel, IPEX]
pip install fastrag[openvino] # Intel optimized backend using OpenVINO
pip install fastrag[elastic] # Support for ElasticSearch store
pip install fastrag[qdrant] # Support for Qdrant store
pip install fastrag[colbert] # Support for ColBERT+PLAID; requires FAISS
pip install fastrag[faiss-cpu] # CPU-based Faiss library
pip install fastrag[faiss-gpu] # GPU-based Faiss library
```

To work with the latest version of fastRAG, install it from a clone of the repository:

```bash
pip install .        # install fastRAG from the local clone
pip install .[dev]   # include development dependencies
```

The code is licensed under the Apache 2.0 License.
This is not an official Intel product.