- Keep track of release progress of your application
- Generate release notes, technical details or blogpost for version release
- Easily publish generated content on your Confluence or just copy it to clipboard
- Choose tasks and issue types to generate content
- Keep track of the generated content for versions
- Dark theme is also supported!
- Choose from multiple AI backends and models
Inspiration and what it does
Our inspiration came from our streamlined software release process. With an automated CI/CD system that already handles various technical aspects, we saw an opportunity to take it a step further by leveraging the latest advancements in AI tools. Release Assistant makes it easier to keep track of the version release process. During a release, it does the heavy lifting of generating helpful content, like:
- Release Notes - in the form of a brief summary with new features, improvements and bugfixes. Ready to publish on the marketplace!
- Technical Details - a more in-depth technical summary of what was done in a given version, especially useful for internal teams (developers, QA, management)
- Blogpost - a user- and SEO-friendly blogpost with all the cool stuff from your release, easily publishable on Confluence with little more than a single click!

Each piece of generated content can be copied to the clipboard or published on Confluence using dedicated buttons. When publishing to Confluence, you can edit the AI-generated text to best suit your needs.
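As a rough sketch of what the Confluence publishing step could look like (the helper name, space key, and content shape are illustrative, not the app's actual code), the request body for Confluence Cloud's content API might be built like this:

```typescript
// Hypothetical sketch: building a Confluence Cloud "create page" request
// body for AI-generated release notes. Names and values are illustrative.
interface GeneratedContent {
  title: string;
  html: string; // AI-generated text, converted to Confluence storage-format HTML
}

function buildConfluencePagePayload(content: GeneratedContent, spaceKey: string) {
  // Confluence's content API expects the page body in "storage" representation.
  return {
    type: "page",
    title: content.title,
    space: { key: spaceKey },
    body: {
      storage: {
        value: content.html,
        representation: "storage",
      },
    },
  };
}

const payload = buildConfluencePagePayload(
  { title: "Release Notes 2.1.0", html: "<p>New features...</p>" },
  "REL"
);
// The payload would then be POSTed to the Confluence REST content endpoint.
```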
Additionally, you can track the progress of a version and its associated tasks in the Version progress tab, where each TODO/In Progress/Done task is displayed on an easily readable timeline. This way you can visually inspect how much work is already done and how much time is left until the release. It's also helpful for inspecting already released versions and checking how long each task took to complete.
AI options
Currently, we provide a few AI services to choose from for data generation:
- OpenAI - the original OpenAI ChatGPT using the GPT-3.5 Turbo model
- Azure OpenAI - OpenAI ChatGPT running on Azure infrastructure with the GPT-3.5 Turbo model
- Azure OpenAI GPT-4.0 - OpenAI ChatGPT running on Azure infrastructure with the GPT-4.0 model
We also have support for the GPT4All REST webservices, so soon users will be able to use their own private AI backends such as (but not limited to) Orca and Llama2!
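To illustrate how switching between these backends could work, here is a minimal sketch of a backend-agnostic request builder. The URLs, port, and model names are assumptions for illustration; it relies on the fact that hosted OpenAI services and self-hosted GPT4All servers expose similarly shaped chat APIs:

```typescript
// Illustrative sketch: one request shape targeting interchangeable AI
// backends. Endpoint URLs and model names below are hypothetical examples.
interface BackendConfig {
  name: string;
  baseUrl: string;
  model: string;
}

const backends: BackendConfig[] = [
  { name: "OpenAI", baseUrl: "https://api.openai.com/v1", model: "gpt-3.5-turbo" },
  // A self-hosted GPT4All server (address and model are placeholders):
  { name: "GPT4All", baseUrl: "http://localhost:4891/v1", model: "orca-mini" },
];

function buildChatRequest(backend: BackendConfig, prompt: string) {
  return {
    url: `${backend.baseUrl}/chat/completions`,
    body: {
      model: backend.model,
      messages: [{ role: "user", content: prompt }],
    },
  };
}

// Switching backends only changes the config entry, not the calling code.
const req = buildChatRequest(backends[1], "Summarize version 2.1.0");
```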
How we built it (and other technical challenges and accomplishments)
Before proceeding with the development of our app, we went through a long process of researching the current state of available AI options and whether our goal was even achievable. We tested a couple of popular AI language models and concluded that ChatGPT would be suitable for our case. There was also the option to run a private instance of this model using the open-source GPT4All, which can run pre-trained models without the need for powerful GPUs. However, at that time we were working with the 25-second timeout imposed by Forge, a limit that was too small for the response times of GPT4All and even the OpenAI/Azure OpenAI online backends. So we decided to implement a proxy service that creates asynchronous jobs that do not block the Forge backend. This led us to use Forge web triggers to receive the response from our proxy server as soon as it was ready. The problem was later mitigated when the time limit was extended to 55 seconds for async Forge events. Despite that, we decided to keep our proxy service for more extreme cases, such as GPT4All with its much longer response times.
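The asynchronous-job pattern described above can be sketched as follows. This is a minimal in-memory illustration under our own assumptions, not the actual proxy code: the callback here stands in for the proxy calling back into a Forge web trigger URL once the slow AI backend responds.

```typescript
// Minimal sketch of the async-job proxy pattern: accept a job, return a
// job id immediately, and notify the caller when the slow backend is done.
// All names are illustrative.
type JobStatus = "pending" | "done";

interface Job {
  id: string;
  status: JobStatus;
  result?: string;
}

const jobs = new Map<string, Job>();
let nextId = 0;

// Simulates a slow AI backend (e.g. GPT4All) with a delayed promise.
function slowAiBackend(prompt: string): Promise<string> {
  return new Promise((resolve) =>
    setTimeout(() => resolve(`summary of: ${prompt}`), 50)
  );
}

// Proxy endpoint: enqueue a job and return its id without blocking.
function submitJob(prompt: string, onDone: (job: Job) => void): string {
  const id = String(++nextId);
  const job: Job = { id, status: "pending" };
  jobs.set(id, job);
  slowAiBackend(prompt).then((result) => {
    job.status = "done";
    job.result = result;
    onDone(job); // stands in for invoking the Forge web trigger
  });
  return id;
}

const id = submitJob("version 2.1.0 changes", (job) => {
  console.log(`job ${job.id} finished: ${job.result}`);
});
console.log(`submitted job ${id}, status: ${jobs.get(id)?.status}`);
```

Because `submitJob` returns before the backend finishes, the Forge side never has to wait out the AI response within its own timeout window.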
We also decided to give the UIKit 2 frontend on Forge a chance, which enabled us to build a big portion of our main frontend module. However, due to problems with the copy-to-clipboard API call from the UIKit 2 button (which plays a big role in our app's functionality), we had to switch back to Custom UI - at least for now. On the other hand, UIKit 2 was a perfect fit for our configuration panel.
What we have learned
This is our first bigger project using AI, so there was a lot to learn from it. There was also a lot of experimentation to find what works best for each language model. We also got hands-on experience with the latest Forge/Jira Cloud changes and improvements, like UIKit 2 and theming for Jira dark mode.
What's next for Release Assistant
The main work-in-progress feature right now is enabling users to define their own AI backends. With this, our users will be able to select from our pre-made public AI backends or use their own private backend. Besides that, we also have a couple more feature improvements in our backlog:
- AI-generated status of a project,
- Publishing statuses on Atlas,
- Option to select/deselect fields that content will be generated from,
- Generating a header image for a blogpost.




