Inspiration 📝

I built a RAG-based chat assistant to help me quickly get summaries of what I am learning. In this example, I used the open-source Rust programming book (https://github.com/rust-lang/book) as the dataset for grounding the assistant's responses. In my last hackathon, I bounced between the book and an LLM while developing my Rust project, but I often found a disconnect between what I was learning from the book and the answers the LLM gave me while building. Learning from the book was still the right starting point since I was new to the language, and I built this app as a supplement to make that process even easier.

What it does ⚙️

  • Conversational assistant that provides examples and summaries drawn from the open-source Rust programming book
  • Offers search filtering by chapter (represented as categories); see the retrieval sketch below
  • Uses TruLens to measure context relevance, and only answers from contextually relevant passages
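
As a rough illustration of the chapter filter, here is a minimal retrieval sketch assuming a Cortex Search service built over chunked book text and queried through the quickstart-style Python API. The database, schema, service, and column names (RUST_BOOK_DB, RUST_BOOK_SEARCH, CHUNK, CATEGORY) are hypothetical placeholders, not the app's real names.

```python
# Minimal sketch of chapter-filtered retrieval against a Cortex Search service.
# All object names below are illustrative assumptions.
from typing import Optional
from snowflake.core import Root
from snowflake.snowpark import Session

def retrieve_chunks(session: Session, query: str,
                    category: Optional[str] = None, k: int = 4) -> list:
    root = Root(session)
    svc = (
        root.databases["RUST_BOOK_DB"]                  # hypothetical database
        .schemas["PUBLIC"]
        .cortex_search_services["RUST_BOOK_SEARCH"]     # hypothetical service
    )
    kwargs = {"query": query, "columns": ["CHUNK", "CATEGORY"], "limit": k}
    if category:
        # Restrict the search to one chapter (category) when a filter is chosen.
        kwargs["filter"] = {"@eq": {"CATEGORY": category}}
    resp = svc.search(**kwargs)
    return [row["CHUNK"] for row in resp.results]
```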

How we built it 🛠️

  • Snowflake Cortex => Cortex Search Service for fetching contextual data
  • Mistral LLM => Powers response generation (see the generation sketch below)
  • Streamlit => Easy deployment for the RAG-based chat application
  • TruLens => Evaluates context relevance
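
Below is a minimal sketch of the generation step, assuming chunks retrieved as in the sketch above and a Mistral model served through Snowflake Cortex. The model name and prompt wording are illustrative, not the exact ones used in the app.

```python
# Sketch: build a context-grounded prompt and call a Mistral model via Cortex.
from snowflake.cortex import Complete

def answer(question: str, context_chunks: list) -> str:
    context = "\n\n".join(context_chunks)
    prompt = (
        "Answer the question using only the context from the Rust book below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    # Cortex runs the completion server-side; an active Snowpark session is assumed.
    return Complete("mistral-large", prompt)
```

In the Streamlit app, the question can come from st.chat_input and the chapter filter from a selectbox, with the answer rendered back into the chat.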

Challenges we ran into 😩

  • Category classification => Snowflake Functions were used initially, but the categories weren't being mapped correctly, so a user-defined function (UDF) was used instead for a deterministic approach (sketched after this list)
  • Setting up the Python environment => Most of the development was done in Snowflake Notebooks, but eventually we set up a local Python environment, which made it easier to test specific pieces of the RAG application (Cortex Search, TruLens, Streamlit) in isolation and then put them together into a fully built application
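
As a rough sketch of the deterministic approach, a Snowpark Python UDF can map each source file to its category by chapter prefix instead of asking an LLM to classify the text. The mapping table and names below are hypothetical; registering with the @udf decorator assumes an active notebook session.

```python
# Sketch: deterministic chapter-to-category mapping registered as a Snowpark UDF.
from snowflake.snowpark.functions import udf
from snowflake.snowpark.types import StringType

CATEGORY_BY_CHAPTER_PREFIX = {
    "ch04": "Ownership",             # hypothetical mapping entries
    "ch09": "Error Handling",
    "ch10": "Generics and Traits",
}

@udf(name="map_chapter_to_category", return_type=StringType(),
     input_types=[StringType()], replace=True)
def map_chapter_to_category(file_name: str) -> str:
    # Look up the category from the file name's chapter prefix.
    for prefix, category in CATEGORY_BY_CHAPTER_PREFIX.items():
        if file_name.startswith(prefix):
            return category
    return "General"
```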

Accomplishments that we're proud of 😎

  • Building this app was a challenge, especially during the holidays, but I'm grateful to have it deployed on Streamlit, and putting together the video was a fun experience
  • Incorporating TruLens and seeing the role it plays in the RAG triad

What we learned 🙌🏽

  • What RAG is: Retrieval-Augmented Generation (RAG) enhances an LLM's responses by supplying it with retrieved context at query time. This helps reduce hallucinations and gives the user more relevant answers to their queries.
  • Setting up a conda environment for building a Python project
  • Using TruLens to filter for contextually relevant data (see the sketch below)
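
Here is a minimal sketch of that filtering step, assuming a TruLens feedback provider exposing a context_relevance(question, context) score in [0, 1]. The provider setup and import paths vary by TruLens version, so the provider object and the 0.5 threshold below are assumptions.

```python
# Sketch: drop retrieved chunks that TruLens scores as irrelevant to the question.
RELEVANCE_THRESHOLD = 0.5  # assumed cutoff; tune for your data

def keep_relevant(provider, question: str, chunks: list) -> list:
    # Score each chunk against the question, keep only the relevant ones,
    # so the LLM never sees context below the threshold.
    scored = [(chunk, provider.context_relevance(question, chunk)) for chunk in chunks]
    return [chunk for chunk, score in scored if score >= RELEVANCE_THRESHOLD]
```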

What's next for RAG Based Chat Assistant for Rust

  • Clean up the staging database table before the chunking process (one possible shape of this step is sketched below)
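
This is one possible shape of that cleanup, not a committed design: truncate a hypothetical DOCS_STAGING table, then re-chunk the source pages with a naive fixed-size splitter before writing them back. Table and column names are illustrative.

```python
# Sketch: reset the staging table, then re-chunk and reload the book pages.
def reset_staging(session):
    # Drop stale rows so re-running the pipeline doesn't chunk duplicates.
    session.sql("TRUNCATE TABLE IF EXISTS DOCS_STAGING").collect()

def load_and_chunk(session, pages, chunk_size: int = 1500):
    # pages: list of (file_name, text) tuples; the real pipeline may use a
    # smarter splitter than this fixed-size one.
    reset_staging(session)
    rows = []
    for file_name, text in pages:
        for i in range(0, len(text), chunk_size):
            rows.append((file_name, text[i:i + chunk_size]))
    session.create_dataframe(rows, schema=["FILE_NAME", "CHUNK"]) \
           .write.save_as_table("DOCS_STAGING", mode="append")
```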

Built With

  • mistral
  • python
  • snowflake
  • streamlit
  • trulens