One of the most promising domains for AI must be bureaucracy, because bureaucratic work is structured and complex at the same time. More importantly, bureaucratic work is boring, yet it requires deep knowledge and expertise to decide what to do. Forms are probably the best-known outposts of bureaucracy: a form is the connection point between users and the bureaucratic machinery. That is why my teammates and I decided to rethink web forms at a hackathon event at Fremtind.
We asked ourselves some philosophical questions: What is the actual purpose of a form? Is it for protocol or for communication?
A form is a data collection protocol. It enforces structure, ensures completeness, and makes processing predictable. However, it is very often also used to communicate with users. If you have lived in Norway long enough, you have probably interacted with forms from NAV. A typical user may not know the basic regulations of paternity leave, for example, so the form interface also provides the user with some knowledge. The same applies to the Politi or to insurance companies like Fremtind.
Here is another example of why we see forms as a protocol channel. Every web form uses a predefined date format. For instance, the system strictly expects the following format:
YYYY-MM-DD
However, when communicating with a human being, the user wouldn’t say ‘2026-03-14’. The user would simply say ‘the accident happened yesterday’. Likewise, if we move user interaction from protocol to natural communication, we can ease a lot of the discomfort caused by bureaucracy. So, a chatbot can do this for us, right? After all, AI is built to communicate like a human.
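As a tiny illustration of the gap between the two styles, here is a minimal sketch that maps a few relative phrases to the strict ISO format a form backend expects. The phrase table is a hypothetical stand-in for what an LLM would infer from free text:

```python
from datetime import date, timedelta

# Hypothetical phrase table: in the real flow, an LLM resolves the
# phrase; here a lookup stands in for that step.
OFFSETS = {"today": 0, "yesterday": -1, "the day before yesterday": -2}

def resolve_relative_date(phrase: str, today: date) -> str:
    """Turn a relative phrase into the strict YYYY-MM-DD format."""
    days = OFFSETS[phrase.strip().lower()]
    return (today + timedelta(days=days)).isoformat()

# "the accident happened yesterday", said on 2026-03-15:
iso = resolve_relative_date("yesterday", date(2026, 3, 15))  # → "2026-03-14"
```

The user speaks in the left-hand vocabulary; the protocol only accepts the right-hand one, and something has to translate between them.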
Building yet another chatbot is no longer a particularly interesting hackathon idea in 2026. At the same time, a chatbot cannot replace a web form. The solution should not only work as a communication layer for the user, but also do some valuable work for the client. That is why we created FormAgent as a hackathon project. In short, the user chats with an agent, which collects data from the chat dialogue. Finally, FormAgent picks the correct form based on the user’s case and fills it out before the user even knows what is happening in the background.
So, instead of:
User → Form
We may move toward:
User → AI → Form
I don’t think web forms will disappear from the internet the way Adobe Flash Player did. But the amount of time we spend on forms can be reduced in the coming years, and this change alone could have a huge impact on UX and the development of web applications.
As I pointed out earlier, finding the correct form is a real hassle for insurance customers. This is called the document routing problem. The second hassle is filling out the form. Most organizations treat these two problems as disjoint sets, but they actually intersect. As the client shares more details about the accident, we become more certain about which claims form the user should choose. At the same time, the same story already gives us information to fill out the form. Please see the simple diagram below:

To implement our solution, we used Ollama, an open-source tool designed to run LLMs locally on your own machine, so no organizational data is shared with third parties. Ollama is a great tool for experimenting with AI engineering on your local device without any binding subscription or security concerns, and it lets you easily switch between and manage models locally.
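For context, Ollama exposes a REST API on localhost (port 11434 by default). A minimal sketch of building a chat request body for it; the model name is just an example, and whether you POST with requests or urllib is up to you:

```python
import json

# Ollama's default local chat endpoint.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_request(model: str, history: list, user_message: str) -> str:
    """Serialize a chat request body to POST to OLLAMA_URL.
    `history` is a list of {"role": ..., "content": ...} messages."""
    messages = history + [{"role": "user", "content": user_message}]
    return json.dumps({"model": model, "messages": messages, "stream": True})

body = build_chat_request("llama3", [], "The accident happened yesterday")
```

Because everything stays on localhost, the conversation never leaves the machine, which is exactly the privacy property we wanted for a hackathon prototype handling claims data.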
As the demo video below illustrates, the user communicates with FormAgent by answering its questions. During the conversation, FormAgent operates two communication channels: the first carries the human dialogue, and the second saves data to the backend in the desired format. Based on the user input, the previous chat history and the selected form’s content, FormAgent keeps the conversation going, while the second channel produces JSON data in the requested format and saves it in the background. This is how we separate the communication from the protocol.
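One way to realize the two channels, assuming the model is prompted to wrap its structured output in `<data>` tags (our own convention, not a built-in Ollama feature), is to split each raw model response into a dialogue part and a data part:

```python
import json
import re

def split_channels(model_output: str):
    """Separate the dialogue channel from the data channel.
    Assumes the model was prompted to append its structured output
    inside <data>...</data> tags (a hypothetical convention)."""
    match = re.search(r"<data>\s*(\{.*?\})\s*</data>", model_output, re.DOTALL)
    data = json.loads(match.group(1)) if match else {}
    reply = re.sub(r"<data>.*?</data>", "", model_output, flags=re.DOTALL).strip()
    return reply, data

raw = 'Got it. When did the accident happen? <data>{"incident_date": "2026-03-14"}</data>'
reply, data = split_channels(raw)
# reply goes to the chat window; data goes to the backend form fields
```

The reply string is what the user sees; the JSON object is silently merged into the form state in the background.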
The demo video above clearly shows natural language being used to fill out a form. Since this is a prototype running the AI server on localhost, we did not try to fill out the whole form with AI input: as you enter more prompts, the limited context window of the experimental model fills up quickly. Nevertheless, the demo shows that our solution covers different types of form components, such as date pickers, dropdowns, radio buttons and textareas. One more thing is worth mentioning. Although we called our project FormAgent, it does not completely fit the definition, because AI agents are expected to collaborate with external services, which FormAgent does not do. Still, it is more autonomous than a chatbot, so the distinction is worth pointing out.
Thanks to this hackathon project, we gained hands-on experience in AI engineering. For example, we learnt why tokens are streamed while a response is being generated. I also used to underestimate the importance of prompt engineering, and now I clearly see that it makes a huge difference when developing an AI-powered app.
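On streaming: Ollama sends one JSON object per line, each carrying a fragment of the response and a done flag, so the UI can render tokens as they arrive instead of waiting for the full answer. A small sketch of accumulating such a stream (the sample lines below are illustrative):

```python
import json

def accumulate_stream(ndjson_lines):
    """Concatenate streamed token fragments. Each line is a JSON object
    with a 'response' fragment and a 'done' flag, in the style of
    Ollama's streaming API."""
    parts = []
    for line in ndjson_lines:
        obj = json.loads(line)
        parts.append(obj.get("response", ""))  # render this fragment immediately
        if obj.get("done"):
            break
    return "".join(parts)

# Illustrative stream of two fragments:
stream = [
    '{"response": "Hel", "done": false}',
    '{"response": "lo!", "done": true}',
]
full_text = accumulate_stream(stream)
```

Streaming matters for UX: the user starts reading the reply while the model is still generating, which hides most of the latency of a local model.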
All in all, this project gave us a new perspective on what forms actually are—and what they could become. Forms will likely remain part of digital systems, but the way we interact with them is already changing. Instead of forcing users to adapt to rigid structures, we can start adapting systems to natural human communication. If that shift happens, even partially, it will remove a surprising amount of friction from everyday digital experiences.
Sercan LEYLEK