Inspiration
Prompt engineering is complicated: it takes lots of trial and error and eats into precious development time.
What it does
PromptFlow streamlines prompt engineering for start-ups and small to mid-size companies by learning from their customers' data.
How we built it
PromptFlow is built with Ruby on Rails, Llama-3.3, and Gemma-9b. The Ruby on Rails platform serves as both the front-end and back-end server, managing user interactions as well as inter-platform communication. We use Llama-3.3 and Gemma-9b to optimize system prompts based on the user inputs that Rails collects from any chat system. PromptFlow is a service designed to be compatible with any chat system. To demonstrate this, we also built a sample chat system that a customer would use, made with React, TypeScript, and Tailwind, which leverages GroqCloud as the API behind our simple conversational chatbot.
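To make the optimization flow above concrete, here is a minimal Ruby sketch of how the Rails backend might assemble collected user inputs into a prompt-optimization request for an LLM (GroqCloud exposes an OpenAI-compatible chat endpoint). The model name, function name, and message wording are illustrative assumptions, not PromptFlow's actual internals.

```ruby
require "json"

# Assumed model identifier for illustration only.
OPTIMIZER_MODEL = "llama-3.3-70b-versatile"

# Collapse the chat-system feedback gathered by Rails into a single
# chat-completions request asking the optimizer model for a better prompt.
def build_optimization_payload(current_prompt, user_inputs)
  feedback = user_inputs.map { |m| "- #{m}" }.join("\n")
  {
    model: OPTIMIZER_MODEL,
    messages: [
      { role: "system",
        content: "You improve system prompts based on real user feedback." },
      { role: "user",
        content: "Current system prompt:\n#{current_prompt}\n\n" \
                 "User inputs collected from the chat system:\n#{feedback}\n\n" \
                 "Return an improved system prompt." }
    ]
  }
end

payload = build_optimization_payload(
  "You are a helpful support bot.",
  ["The bot is too verbose.", "It forgot our refund policy."]
)
puts JSON.pretty_generate(payload)
```

The payload would then be POSTed to the chat API; keeping this step as a pure function makes it easy to test without touching the network.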
Challenges we ran into
Due to limitations on free AI models, we encountered rate-limit issues and challenges with managing user input. Initially, we aimed to compile all collected data into a single request to generate a more refined system prompt, but we had to adjust our approach, processing feedback in a way that matched the capabilities of our current AI model. Fortunately, our service is already equipped to handle these changes, and with access to a more advanced AI model, only minor tweaks would be needed to enhance its performance.
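The workaround described above can be sketched in a few lines: rather than sending all collected feedback in one oversized request, slice it into batches each individual call can handle. The batch size here is an assumed illustrative limit, not an actual Groq quota.

```ruby
# Assumed per-call limit for illustration; a real value would be tuned
# to the model's rate limits and context window.
MAX_ITEMS_PER_CALL = 20

# Split collected user inputs into batches small enough for one API call.
def batch_feedback(inputs, batch_size = MAX_ITEMS_PER_CALL)
  inputs.each_slice(batch_size).to_a
end

# Each batch would be summarized separately, then the summaries merged,
# keeping every individual request under the rate limit.
batches = batch_feedback((1..45).map { |i| "feedback #{i}" })
batches.size # => 3
```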
Accomplishments that we're proud of
We built two distinct products and had to seamlessly connect them while managing multiple API endpoints. At first, it was challenging—conflicts arose at the input/output interfaces and even during initialization. It felt like navigating a maze without a clear starting point. To tackle this, we systematically worked through each endpoint, ensuring compatibility one step at a time.
What we learned
Through this project, we learned the importance of adapting to constraints, whether it was working around API rate limits or optimizing input handling. Integrating multiple products and managing various endpoints reinforced the need for a structured approach to architecture and debugging. We also realized the power of iterative problem-solving—breaking down complex challenges into smaller, manageable tasks made them easier to tackle. Flexibility was key, as working with free AI models required us to rethink our approach and find creative solutions. Most importantly, we designed our system with scalability in mind, ensuring that future upgrades would require only minimal adjustments. These lessons will undoubtedly shape how we approach future projects.
What's next for PromptFlow
To optimize rate limits and improve data aggregation, we will structure and combine user inputs before making API calls, reducing redundancy and enhancing efficiency. By implementing batch processing, caching, and request scheduling, we can minimize unnecessary queries while ensuring smooth scalability for future AI model upgrades.
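As a rough sketch of the planned caching layer, identical aggregated inputs should never trigger a second API call. The class name and in-memory hash below are hypothetical stand-ins for whatever store (the Rails cache, Redis, etc.) the real service would use.

```ruby
require "digest"

# Hypothetical cache keyed on the (order-insensitive) set of user inputs,
# so repeated aggregations reuse the earlier optimization result.
class PromptCache
  def initialize
    @store = {}
  end

  # Return the cached result for this input set, or run the block
  # (i.e. make the API call), store its result, and return it.
  def fetch(inputs)
    key = Digest::SHA256.hexdigest(inputs.sort.join("\n"))
    @store[key] ||= yield(inputs)
  end
end

cache = PromptCache.new
calls = 0
optimize = ->(inputs) { calls += 1; "optimized(#{inputs.size})" }

cache.fetch(["a", "b"]) { |i| optimize.call(i) }
cache.fetch(["b", "a"]) { |i| optimize.call(i) } # same set reordered: cache hit
calls # => 1
```

Sorting the inputs before hashing makes the key order-insensitive, so reshuffled copies of the same feedback do not count against the rate limit twice.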
Built With
- css
- gemma-9b
- groqcloud
- html
- javascript
- llama-3.3
- ngrok
- python
- react
- ruby
- ruby-on-rails
- sentencetransformers
- tailwind
- typescript