Inspiration
We initially wanted to create a code simplifier in Java but were not sure it could be built in the short time available, so we pivoted to a project where programmers can ask the app questions and get responses. That is when we discovered Ollama and decided to integrate it into our project to build an offline AI assistant tool.
What it does
An offline AI assistant that allows users to enter a question and receive a generated response through a graphical user interface (GUI).
How we built it
The application is built entirely in Java. It uses Java Swing for the GUI and Ollama to run language models locally, without requiring an internet connection.
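The core wiring can be sketched roughly like this. The endpoint and JSON shape follow Ollama's documented /api/generate API; the class and method names are our own illustration, not the project's actual code, and the request body is assembled by hand here for brevity (the real app uses org.json):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch of talking to a local Ollama server from Java.
// Names are illustrative, not the project's actual code.
public class OllamaClient {
    private static final String ENDPOINT = "http://localhost:11434/api/generate";

    // Build the JSON request body by hand; the real app uses org.json.
    static String buildRequestBody(String model, String prompt) {
        String escaped = prompt.replace("\\", "\\\\").replace("\"", "\\\"");
        return "{\"model\": \"" + model + "\", \"prompt\": \"" + escaped
                + "\", \"stream\": false}";
    }

    // Send the prompt to the local Ollama server and return its raw
    // JSON reply (which carries the generated text in a "response" field).
    public static String ask(String model, String prompt) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(ENDPOINT))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        buildRequestBody(model, prompt)))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```

Because everything runs against localhost, the app works with no internet connection as long as the Ollama server and a pulled model are present on the machine.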
Challenges we ran into
We ran into the challenge of Ollama being stateless, meaning it does not remember the answers to previous questions, which made it less flexible. To resolve this, we built a class to store the previous prompts. Other challenges were mainly connecting the model's output with the UI logic and refining the responses generated by Ollama.
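One common way to implement such a workaround is to replay the stored exchanges at the start of each new prompt, so the model sees the context even though every request is independent. A minimal sketch of a history class along those lines (the names are illustrative, not the project's actual code):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a conversation store that works around Ollama's
// statelessness by replaying earlier exchanges in each new prompt.
// Names are illustrative, not the project's actual code.
public class ConversationHistory {
    private final List<String> exchanges = new ArrayList<>();

    // Record a completed question/answer pair.
    public void add(String question, String answer) {
        exchanges.add("User: " + question + "\nAssistant: " + answer);
    }

    // Prepend everything said so far to the next question, so the
    // model can refer back to its own earlier answers.
    public String buildPrompt(String nextQuestion) {
        StringBuilder sb = new StringBuilder();
        for (String exchange : exchanges) {
            sb.append(exchange).append("\n");
        }
        sb.append("User: ").append(nextQuestion).append("\nAssistant:");
        return sb.toString();
    }
}
```

The trade-off of this approach is that prompts grow with every turn, so long conversations eventually need truncation or summarization.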
Accomplishments that we're proud of
We are proud that our app is complete and works how we initially planned.
What we learned
We learned how to use the Java Swing framework to create the GUI and format user interactions. We also learned how to connect a Java application to a local AI model using Ollama, stream responses from Llama 3.1, and handle JSON data using the org.json library.
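When streaming is enabled, Ollama sends one JSON object per line, each carrying a "response" fragment, and the full answer is the concatenation of those fragments. The assembly step can be sketched as below; the string scan here is a deliberately naive stand-in for illustration (the real app parses each line with org.json instead), and the class name is our own:

```java
// Sketch of assembling a streamed Ollama reply. With "stream": true,
// Ollama emits newline-delimited JSON objects, each holding a
// "response" fragment of the generated text.
// The naive string scan below is for illustration only; the real app
// parses each line with org.json.
public class StreamAssembler {
    // Extract the string value of the "response" key from one JSON line.
    // (Does not handle escaped quotes inside the value.)
    static String extractResponse(String jsonLine) {
        String key = "\"response\":\"";
        int start = jsonLine.indexOf(key);
        if (start < 0) return "";
        start += key.length();
        int end = jsonLine.indexOf('"', start);
        return end < 0 ? "" : jsonLine.substring(start, end);
    }

    // Concatenate the fragments from a sequence of streamed lines.
    public static String assemble(Iterable<String> lines) {
        StringBuilder sb = new StringBuilder();
        for (String line : lines) {
            sb.append(extractResponse(line));
        }
        return sb.toString();
    }
}
```

Appending each fragment to the GUI as it arrives is what makes the response appear to "type out" rather than blocking until the model finishes.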
What's next for Offline AI Assistant
We plan to add more features to improve the flexibility of our app, such as storing the history of previous prompts and responses and finding ways to make response generation faster.