Inspiration

As students on the cusp of graduating, we have become quite familiar with learning management systems and their pitfalls over the years. More importantly, we have seen first-hand the disproportion between the number of students needing help with coursework and the number of available teaching assistants. Drawing from this frustration, we felt that offloading some of our TAs' grunt work to an LLM teaching assistant - a GPTa - would be beneficial.

What it does

GPT-LMS is a concept learning management system that leverages the Azure OpenAI Service for seamless LLM features. Every course created comes with its own GPT Teaching Assistant, or GPTa. Students can direct questions to the GPTa instead of a human TA, reducing reliance on human TAs for menial tasks. Human TAs can also rely on the GPTa to generate and publish quizzes from source material with simple prompts.

How we built it

By generating and storing vector embeddings for all course content, we can perform a similarity search against the user's query to supply our model with relevant documents from the course. This results in context-aware answers, as the model can source answers from material unique to the course as taught by the instructors. By engineering prompts with LangChain, we can also have GPTa produce answers in a specific format, allowing it to interact with the LMS's quiz creation endpoints.
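At its core, the retrieval step above boils down to ranking stored document embeddings by cosine similarity to the query embedding and handing the top matches to the model as context. Here is a minimal sketch in plain TypeScript; the names (CourseDoc, topKDocuments) are illustrative rather than taken from our codebase, and in practice the vector store performs this search for us:

```typescript
interface CourseDoc {
  id: string;
  text: string;
  embedding: number[]; // vector produced by an embeddings model, e.g. Azure OpenAI
}

// Cosine similarity: dot product of the vectors divided by the product of their magnitudes.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k documents most similar to the query embedding; their text is
// then injected into the model's prompt as course-specific context.
function topKDocuments(
  docs: CourseDoc[],
  queryEmbedding: number[],
  k: number
): CourseDoc[] {
  return [...docs]
    .sort(
      (x, y) =>
        cosineSimilarity(y.embedding, queryEmbedding) -
        cosineSimilarity(x.embedding, queryEmbedding)
    )
    .slice(0, k);
}
```

With libraries like LangChain.js this entire step is typically a single vector-store call, but the ranking logic is the same.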

Challenges we ran into

Outside of prior experience with React, every single technology used in this project was unfamiliar to us. This became especially challenging given the rapidly changing state of Next.js and frameworks such as LangChain. We discovered that LangChain.js lacked the ability to load documents from Azure containers, or to use Cosmos DB for MongoDB vCore as a vector store, forcing us to find an alternate implementation. Admittedly, it would not be too challenging to contribute this feature to LangChain.js, as it already exists in the Python version - although time was an issue. We also did not have the foresight to set up a deployment pipeline until we were halfway through. It was at this point that we realized we could not deploy our Next.js app to Azure Static Web Apps using the new app router, prompting a rewrite using the pages router.

What we are proud to have learnt

This was a very illuminating introduction to developing with AI services and building cloud-native applications. We also gained immense experience with project management, especially in seeing the benefits of a CI/CD pipeline that let us preview deployments and catch breaking changes.

What's next for GPT-LMS

Our immediate plan is to contribute the missing features discussed above to LangChain.js. GPT-LMS could also be rewritten from the ground up, applying the lessons learned from this project, and made open source to allow individuals to host their own LMS.

Built With

  • azure-cosmosdb-for-postgresql
  • azure-openai-service
  • azure-static-web-apps
  • langchain
  • next.js
  • pinecone