From Web-Forms to AI: Lessons from a Hackathon Project

One of the most promising domains for AI is bureaucracy, because bureaucratic work is structured and complex at the same time. More importantly, bureaucratic work is boring, yet it demands deep knowledge and expertise at every decision point. Forms are probably the best-known outposts of bureaucracy: a form is the connection point between users and bureaucratic work. That is why my teammates and I decided to take on web forms at a hackathon event at Fremtind.

We asked ourselves some philosophical questions: What is the actual purpose of a form? Is it for protocol or for communication?

A form is a data collection protocol. It enforces structure, ensures completeness, and makes processing predictable. However, it is also very often used to communicate with users. If you have lived in Norway long enough, you have probably interacted with forms from NAV. A typical user may not know the basic regulations around paternity leave, for example, so the form interface provides that knowledge along the way. The same applies to the Politi or to insurance companies like Fremtind.

Here is another example of why we see forms as a protocol channel. Every web form uses a predefined date format. For instance, the system may strictly expect the following format:

YYYY-MM-DD

However, a user talking to a human being wouldn't say '2026-03-14'. The user would simply say 'the accident happened yesterday'. If we move user interaction from protocol to natural communication in cases like this, we can ease a lot of the discomfort caused by bureaucracy. So, a chatbot can do this for us, right? After all, AI is built to communicate like a human.
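As a tiny illustration of that translation step, here is a sketch of turning a relative phrase into the strict format a form expects. This is not FormAgent's actual code, and the phrase list is deliberately minimal:

```javascript
// Map a relative phrase to the strict YYYY-MM-DD format a form expects.
// "referenceDate" is injected so the function is testable.
function toIsoDate(phrase, referenceDate = new Date()) {
  const offsets = { today: 0, yesterday: -1, 'the day before yesterday': -2 };
  const key = phrase.trim().toLowerCase();
  if (!(key in offsets)) return null; // a real agent would ask a follow-up question
  const d = new Date(referenceDate);
  d.setDate(d.getDate() + offsets[key]);
  return d.toISOString().slice(0, 10);
}

// With 2026-03-15 as "today", "yesterday" becomes '2026-03-14'
toIsoDate('yesterday', new Date('2026-03-15T12:00:00Z'));
```

A real system would of course hand this job to the model itself, but the sketch shows the kind of protocol translation that sits between the user's words and the form field.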

Building yet another chatbot is no longer a particularly interesting hackathon idea in 2026. At the same time, a chatbot cannot replace a web form. The solution should not only work as a communication layer for the user; it should also do valuable work for the client. Therefore, we created FormAgent as a hackathon project. In short, the user chats with an agent, which collects data from the dialogue. Finally, FormAgent picks the correct form based on the user's case and fills it out before the user even knows what is happening in the background.

So, instead of:

User → Form

We may move toward:

User → AI → Form

I don't think web forms will disappear from the internet the way Flash Player did. But the amount of time we spend on forms can shrink in the coming years, and this change alone could have a huge impact on UX and the development of web applications.

As I pointed out earlier, finding the correct form is a real hassle for insurance customers. This is called the document routing problem. The second hassle is filling out the form. Most organizations treat these two problems as distinct sets, but they actually intersect. As the client reveals more details about their accident, we become more certain about which claims form they should choose. At the same time, we are already collecting the information needed to fill it out. Please see the simple diagram below:

While implementing our solution, we used Ollama, an open-source tool designed to run LLMs locally on your own machine, so no organizational data is shared with third parties. Ollama is a great tool for experimenting with AI engineering on your local device without any binding subscription or security concerns. You can easily switch between models and manage them locally.

As the demo video below illustrates, the user communicates with FormAgent by answering its questions, and FormAgent operates two communication channels during the conversation: the first channel carries the human dialogue, and the second saves data to the backend in the required format. Based on the user input, the previous chat history and the selected form content, FormAgent keeps the conversation going, while the second channel produces JSON data in the requested format and saves it in the background. In this way, the two channels separate the communication from the protocol.
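The second channel can be pictured as a simple merge of per-turn extractions into one payload. The following is a hedged sketch, not FormAgent's actual implementation, and the field names are hypothetical:

```javascript
// Each conversation turn yields a JSON fragment extracted by the model;
// the agent merges it into the form state it will eventually submit.
function mergeFormState(formState, extractedFragment) {
  const next = { ...formState };
  for (const [field, value] of Object.entries(extractedFragment)) {
    if (value !== null && value !== undefined && value !== '') {
      next[field] = value; // keep earlier answers unless a turn supplies a new value
    }
  }
  return next;
}

// Two turns of a hypothetical claims conversation:
let form = {};
form = mergeFormState(form, { incidentDate: '2026-03-14' });
form = mergeFormState(form, { incidentType: 'collision', incidentDate: '' });
// form now holds { incidentDate: '2026-03-14', incidentType: 'collision' }
```

The point of the guard clause is that an empty extraction in a later turn must not erase an answer the user already gave.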

The demo video above clearly shows the use of natural language to fill out a form. Since this is a prototype running the AI server on localhost, we did not try to fill out the whole form with AI input: as you enter more prompts, the limited context window of the experimental model fills up quickly. Nevertheless, the demo shows that our solution covers different types of form components, such as date pickers, dropdowns, radio buttons and textareas. One more thing is worth mentioning. Although we called our project FormAgent, it does not completely fit the definition, because AI agents are expected to collaborate with external services. FormAgent does not do that, but it is also more autonomous than a chatbot, so it is important to point out this distinction.

Thanks to this hackathon project, we gained experience in AI engineering. For example, we learnt why tokens are streamed while a response is being generated. I also used to underestimate the importance of prompt engineering, and now I clearly see that it makes a huge difference when developing an AI-powered app.

All in all, this project gave us a new perspective on what forms actually are—and what they could become. Forms will likely remain part of digital systems, but the way we interact with them is already changing. Instead of forcing users to adapt to rigid structures, we can start adapting systems to natural human communication. If that shift happens, even partially, it will remove a surprising amount of friction from everyday digital experiences.

Sercan LEYLEK

“…is not a constructor” error on Vitest V4

Most frontend developers hate major dependency updates, but we cannot live without them. Whenever you upgrade to a new major version, either you must make some adjustments to your config or some of your unit tests fail. Maybe both happen at the same time.

Here is the latest trouble I experienced while migrating Vitest from v3 to v4 in my React application. The error points to a mock function and says this arrow function is not a constructor. Keep the words "arrow function" in mind; they are the key to the fix.

The error message looks scary, and at first I had no idea what was wrong. The line it points to is quite straightforward: it only creates a geocoder object with the Google Maps library. So I checked my mock of the google object.

// Mock 'google' object
window.google = {
  maps: {
    Geocoder: vi.fn().mockImplementation(() => {
      return {
        geocode: vi.fn().mockResolvedValue({
          results: [
            {
              formatted_address: 'Piccadilly Circus London, UK',
            },
          ],
        }),
      };
    }),
  },
};

As you see above, Geocoder is implemented as an arrow function. However, Vitest v4 is stricter about such cases: if you are mocking a class, you can no longer use an arrow function as a replacement. This is what the scary error message was all about. I figured it out by reading the migration documentation, which says: “If you provide an arrow function, you will get <anonymous> is not a constructor error when the mock is called.”

Eventually, I converted Geocoder into a mock class and added a geocode() function to it. This is how the same mock looked in the end:

// Mock 'google' object
window.google = {
  maps: {
    Geocoder: class {
      // async preserves the Promise-based contract that
      // mockResolvedValue gave the original mock
      async geocode() {
        return {
          results: [
            {
              formatted_address: 'Piccadilly Circus London, UK',
            },
          ],
        };
      }
    },
  },
};

And the issue was solved. I hope this example helps someone who faces the same error.

Sercan Leylek

Easter Egg Driven Testing (E²DT)

Small, hidden surprises can be a creative strategy to drive exploratory testing. Such initiatives can also boost team engagement—especially in expert tools or internal systems where conventional testing incentives fall short. That’s why we invented a technique called E²DT: Easter Egg Driven Testing. In short, the developer of a new frontend feature adds an easter egg into their feature branch, and while colleagues try to discover the egg, they test the new feature at the same time.

Like most IT paradigms, E²DT emerged from a real need. Last year, my team was tasked with re-writing a medium-sized frontend application, internally referred to as “Admin.” This tool, used by around 20 employees at Fremtind, is both intimidating and dull. When Admin is mentioned in a meeting, those unfamiliar with it usually go quiet and simply listen.

After the re-write process, our team organized a test party and invited expert users from other teams. Although participants were naturally curious to explore the new Admin and its features, we needed to motivate them to achieve greater test coverage—because the Admin app has many complex functions. So, I added an easter egg and told the participants about it. Their curiosity transformed the testing party from a dull QA session into an exciting challenge.

My easter egg was fairly typical. I placed it on the “Page Not Found” screen. Whenever a user visited an unsupported URL, a “Page Not Found” message appeared in the style of the Breaking Bad logo. After a few seconds, the user’s name was animated on the screen. There were also text boxes where users could enter any name, creating a mini-game. The easter egg’s algorithm checked the input and generated chemical symbols from the periodic table.

Rules for Running an E²DT Session

This experience inspired me to apply easter eggs in a more structured way. Here are the key rules I now follow:

Rule #1: Test assignment should be boring or scary.

Everyone loves a challenge, especially one that feels like a game. Most employees avoid learning about tedious or complex product features—but these are often the most critical ones. E²DT gives the team an engaging reason to explore the dark corners of the product.

Rule #2: Test assignment should be large enough.

A bug fix or a small feature is not suitable for E²DT. The development task needs to be broad enough to hide multiple easter eggs.

Rule #3: Hide your eggs across different browsers.

Cross-browser testing is essential. Placing easter eggs in different browsers encourages it. For example, hide one in Windows Chrome, one in Safari on macOS, and another in a mobile browser.

While running an E²DT session with my colleagues at Fremtind.

Rule #4: Document your eggs.

Keep a cheat sheet that includes:

  • The device or browser needed to trigger the egg
  • Where it’s located
  • How to trigger it

Make sure this list remains secret until after the E²DT session.

Rule #5: Keep eggs within the feature context.

Don’t place your easter egg in a random part of the application. For example, if your task involves changes to the accounting page, hide your eggs there—not in the app’s header or footer. This keeps users focused on your changes. Also, vary the difficulty of your eggs: some easy, some medium, and some hard to find.

Rule #6: Use a separate branch for easter eggs.

Do not commit your easter egg changes into the same feature branch. Instead, create a new branch from your feature branch to implement the eggs. After the testing is complete, you can discard the egg branch before merging the main feature branch.

Rule #7: Invite the whole team.

Organize a dedicated E²DT meeting and invite everyone—frontend and backend developers, testers, designers, the product owner, etc. People from all roles become testers in E²DT. If you’ve added 3 eggs, 45 minutes should be enough.

Tips for Running the Session

Tip #1: Run a short introduction

A short but organized introductory talk will guide your participants. You should cover the following topics:

  • Introduce your feature branch.
    • Every participant should understand what problem your feature branch solves.
    • Testing shows the presence of defects, not their absence. This is the number one testing principle according to ASTQB [1]. Make sure that the participants understand this principle. The primary goal of the event is to find the bugs.
  • Do not forget to mention that easter eggs are hidden in different devices and browsers.

Tip #2: Keep the game fair.

Some of your colleagues will ask for hints to discover the eggs. Frontend developers especially will open the browser's “Inspect” window and try to read your code. Watch out for those wallhacking cheaters!

Tip #3: Observe

When a participant discovers a bug, write it down at once and let the participant keep testing and hunting for easter eggs. Also, when a participant discovers an easter egg, announce it immediately. E²DT does not work as well with home-office participants, because remote employees are at a disadvantage during the event (people start talking at the same time, those in the office can instantly show their screens to each other, and so on).

Creative Easter Egg Ideas

E²DT is a great outlet for developers to unleash their creativity. Sometimes, these small, playful additions can even evolve into real features. Here are a few easter egg types you might try:

Keyboard navigation

Great for accessibility testing. Trigger effects (like an “earthquake” animation) when users tab over a certain button, or reveal a hint that leads to a hidden animation when they type the right word. Below is my example: when the user types “hei”, a Super Mario gif shows up.
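A sketch of how such a typed-word trigger can be implemented; the buffer logic below is illustrative, not my exact easter egg code:

```javascript
// Keep a small buffer of the most recent keystrokes and fire a callback
// when the buffer ends with the secret word ("hei" here).
function makeWordTrigger(secret, onTrigger) {
  let buffer = '';
  return function handleKey(char) {
    buffer = (buffer + char).slice(-secret.length);
    if (buffer === secret) onTrigger();
  };
}

// In the browser this would be wired up roughly like:
//   const trigger = makeWordTrigger('hei', showMarioGif);
//   document.addEventListener('keydown', (e) => trigger(e.key));
```

Because only the last few keystrokes are kept, the user can type anything beforehand and the egg still fires the moment the secret word is completed.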

Drag and drop

Make an element draggable and trigger an animation when it’s dragged.

Clicking several times

Change an image if it’s clicked multiple times—revealing your easter egg.

Easter Egg Driven Testing has been well-received by my colleagues and has proven to be a fun and effective team-building activity at Fremtind. It gives participants a fresh perspective on the product, encourages knowledge sharing, and offers a playful but powerful way to engage users. Creating and discovering easter eggs is both fun and educational, giving frontend developers a chance to experiment, learn, and show off their skills.

Sercan Leylek

References
[1] https://astqb.org/istqb-foundation-level-seven-testing-principles/

My Cloud Journey

Will you be working on the same things in 10 years that you work on today? This question alone might be the foundation of the entire tech learning industry. Everybody is eager to learn the next thing in their field. Innovation is the essence of what we do; it has been this way since the first day of the IT business, and it will not change any time soon. Organizations produce yearly reports to find out how much innovation they should introduce to their services, technology radars are updated, and employees are encouraged to improve their skills. However, most of us are stuck in our current roles, focused on the technology set within them, because this is our home. I wanted to get out of home, so my cloud journey began. As the saying attributed to Leo Tolstoy goes, “All great literature is one of two stories; a man goes on a journey, or a stranger comes to town.”

Getting out of home is one thing, but knowing where to go is another. I had been working as a full-time frontend developer for some time, and I knew I should stretch my legs with some new technology, but I was not sure what to take on. At the same time, my organization decided to move from on-premises infrastructure to AWS, and I adopted this organizational challenge as a personal one. The plan was simple: if Fremtind AS learns AWS, so will I. Growing with your organization is a smart move. For example, if an organization decides to invest in IoT, it will hire people with IoT experience, and you can use these smart heads as your mentors. So, instead of walking, you can take a bus to the next stop of your journey.

On day one, I knew very little about cloud technologies. I knew that they help you scale your resources when there is a sudden surge in customer demand, and that your application gains higher availability thanks to data centers in different locations, but I had no deep insight into how an application runs in the cloud. Therefore, I started from the very beginning with baby steps. More importantly, I needed small achievements along my journey, so I picked the easiest target first: getting certified as an AWS Cloud Practitioner.

Being certified as an AWS Cloud Practitioner means you know which cloud service solves which IT problem, and you get a taste of how these tools work. The whole concept is actually very similar to learning algorithms and data structures. For example, if we need to implement undo functionality in a text editor, we choose a stack as our data structure. We all learnt this at university. You do something similar while preparing for the AWS exam: for instance, if you have a web application with global ambitions, you should use CloudFront's edge locations to serve it from different locations around the world.
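The undo example above, sketched in a few lines:

```javascript
// A stack of previous states makes undo trivial: save the state before
// every change, and undo pops the most recent saved state.
class UndoableText {
  constructor() {
    this.text = '';
    this.history = []; // stack of previous states
  }
  type(chars) {
    this.history.push(this.text); // push current state before mutating
    this.text += chars;
  }
  undo() {
    if (this.history.length > 0) this.text = this.history.pop();
  }
}

const editor = new UndoableText();
editor.type('Hello');
editor.type(' world');
editor.undo(); // editor.text is back to 'Hello'
```

The analogy holds for the exam: the skill being tested is matching a named tool (a stack, CloudFront) to the shape of the problem.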

Besides, learning cloud technologies from AWS does not bind me to Amazon. As I said earlier, most AWS services address IT problems common to all cloud platforms. If you understand how something works in AWS, you can easily understand how it works in Azure, GCP or a European cloud provider. The problem is the same; only the providers and their service names differ. AWS serves content through CloudFront's edge locations; Azure calls the equivalent service Azure Front Door. That's it.

Certifying myself as a Cloud Practitioner gave me more vision along the way, and I decided to move to the next stop: becoming an AWS Certified Developer Associate. This challenge would help me gain more practical experience and knowledge. However, I quickly noticed a dilemma for all newcomers. On one hand, AWS recommends at least one year of hands-on experience developing or maintaining applications with AWS services before taking this certification. On the other hand, without a cloud developer certificate, you cannot land a job to gain that experience. A typical chicken-and-egg problem.

To break this cycle, I took a practical approach: I studied the exam curriculum and built very small practical examples alongside it, combining theory and practice. Finally, I passed the certification exam. However, becoming certified felt like nothing had changed for me. I had to move on from my second stop to the third one. Here are the requirements I set for my third destination:

  • It had to solve a real-world problem. That would keep me engaged.
  • The solution should require several cloud technologies, so the experience would be more fruitful.
  • The solution should be useful to others. That would help me get feedback on what I create.
  • It should be scalable, so I could implement new features later and gain experience with more technologies.

As a result, I created EQMA (Earthquake Monitoring App). EQMA periodically fetches earthquake logs from a target URL, parses them and loads them into a database. When users visit my web application, they can view recent earthquakes in a more readable design. Here is its architecture diagram:
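The heart of that pipeline is the parsing step between fetch and database load. Below is a hedged sketch of it; the log line format here is a hypothetical simplification, not the observatory's actual format:

```javascript
// Parse one raw earthquake log line into a record ready for the database.
// Hypothetical line: "2026-03-14 03:21:05 38.1210 27.4530 7.2 4.8 IZMIR"
// (date, time, latitude, longitude, depth in km, magnitude, place)
function parseQuakeLine(line) {
  const [date, time, lat, lon, depthKm, magnitude, ...place] = line.trim().split(/\s+/);
  return {
    timestamp: `${date}T${time}`,
    latitude: Number(lat),
    longitude: Number(lon),
    depthKm: Number(depthKm),
    magnitude: Number(magnitude), // the value users actually care about
    place: place.join(' '),
  };
}
```

Keeping depth and magnitude as separately named fields is exactly what prevents the misreading problem described below: the UI can render the magnitude prominently and the depth as secondary detail.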

EQMA engages me as a project because I migrated to Norway from Turkey in 2009, and seismic activity is quite high in Turkey and its neighboring countries. Unfortunately, thousands of people have lost their lives and homes in several major earthquake disasters. Yet if I search for “latest earthquakes” (in Turkish) on Google, the first result is Kandilli Observatory (http://www.koeri.boun.edu.tr/scripts/lst6.asp).

Their website was deployed in 2006, and I often get stressed when I visit it while trying to figure out whether another major earthquake has occurred in Turkey. It is very easy to misread the magnitude of an earthquake; I often read the depth value instead of the magnitude.

The real-time data delivered by Kandilli Observatory is valuable because they have many sensors in and around Turkey. However, the way they deliver the content is inefficient. Therefore, I created EQMA and turned the same data into something like this:

Working on a simple monitoring tool has been an educational experience for me, but my journey could not stop there. One of the best ways to truly learn something is to teach it to others. Therefore, I created a video course that shows every implementation detail of my project.

Less than a year ago, I barely knew what “serverless” meant – today, I’ve built a complete cloud solution and created a free YouTube course about it. It all started with a desire to learn AWS from scratch and ended with new opportunities for me. As a frontend developer, I wanted to broaden my horizons, and I found great joy in building an application that solves a real problem for myself and friends. Now, I want to share what I’ve learned and inspire other developers to take the leap into the cloud – not just with theory, but with real code, real challenges, and a real sense of accomplishment.

Sercan LEYLEK / OSLO

Migrating React-Query from v4 to v5

React Query is a library for managing, caching, synchronizing, and updating server state in React applications. If you are reading this post, you already know that much 🙂

Tanstack introduced React-Query's latest major version in November 2023, and I recently decided to migrate my web application to version 5. In this post, I will share some of the experiences I had along the way.

Step 1: Refer to Tanstack documentation for migration

Start with Tanstack's page on Migrating to TanStack Query v5. Like most developers, I tried my luck with jscodeshift's codemod, using the following command in my terminal:

npx jscodeshift@latest ./path/to/src/ \
--extensions=ts,tsx \
--parser=tsx \
--transform=./node_modules/@tanstack/react-query/build/codemods/src/v5/remove-overloads/remove-overloads.js

The codemod somehow did not work and returned the error message below:

npm ERR! code ENOENT
npm ERR! syscall lstat
npm ERR! path /Users/sercan.leylek/.npm/lib
npm ERR! errno -2
npm ERR! enoent ENOENT: no such file or directory, lstat '/Users/sercan.leylek/.npm/lib'
npm ERR! enoent This is related to npm not being able to find a file.
npm ERR! enoent

I had no idea what the .npm folder did until this moment. ChatGPT told me that it is a folder typically found in a user's home directory (~), created when you install Node.js and npm (Node Package Manager) on your system.

Cool, but how do you get a lib folder under .npm? Of course, I did not manually create a folder there, because modifying the contents of .npm could disrupt my Node.js environment. After many useless attempts (deleting node_modules, emptying the cache, …), I finally found a solution. The problem is actually related to jscodeshift, so I installed it globally with the following command:

npm install -g jscodeshift

Installing jscodeshift globally on my Mac created the required lib/ folder, and I finally got a bit further. But this time another error bugged me.

Error [ERR_REQUIRE_ESM]: require() of ES Module ...
remove-overloads.js is treated as an ES module file as it is a js file whose nearest parent package.json contains "type": "module" which declares all .js files in that package scope as ES modules.

I knew why this was happening. I had recently migrated to the latest version of Vite, which requires an ESM setup in your npm project. So, I removed "type": "module" from my package.json file, deleted package-lock.json and installed the node packages again. The codemod finally worked. It refactored some of the lines in my codebase, but it also reported many other functions that I had to fix manually.

All in all, running the codemod took a lot of effort, and the outcome was not particularly effective. Nevertheless, it was worth trying.

Step 2: Identify what to fix in your codebase

Jscodeshift's codemod will give you a list of functions to fix. It is a good starting point, but it might not include every line that needs fixing. Therefore, I recommend running a type-check script (in my project, an npm script wrapping the TypeScript compiler) if you are on a TypeScript project. This gives you a complete picture of what should be fixed.

npm run tsc-check

Step 3: Transform query functions to v5

The most common problems I had were in my useMutation functions. As the Tanstack documentation says, the new React-Query supports a single signature: one object. Replacing onSuccess with onSettled, or passing mutationFn inside an object of arguments, is quite repetitive work, so I won't go through all the details. With this blog post, you know where to fix the migration errors and how to overcome potential obstacles with jscodeshift's codemod.
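To illustrate that single-object signature, here is a before/after sketch with a hypothetical updatePerson mutation. The v4 and v5 calls are shown as comments, since they need a React component context to actually run:

```javascript
// Hypothetical mutation function used by both versions below.
async function updatePerson(person) {
  // e.g. PUT /persons/:id
  return person;
}

// v4 accepted separate arguments (one of several overloads):
//   const mutation = useMutation(updatePerson, {
//     onSuccess: (data) => showToast(`Saved person ${data.id}`),
//   });

// v5 accepts a single options object only:
//   const mutation = useMutation({
//     mutationFn: updatePerson,
//     onSuccess: (data) => showToast(`Saved person ${data.id}`),
//   });
```

The mechanical part of the migration is wrapping the old positional arguments into one object with an explicit mutationFn key; the codemod handles many of these, and the type checker catches the rest.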

Good luck!

Sercan Leylek

Fixing husky/lint-staged command not found error

I ran into this annoying error while switching from WebStorm to Visual Studio Code. After preparing my first commit in VS Code, I received the following error message:

“.husky/pre-commit: line 4: lint-staged: command not found
husky – pre-commit hook exited with code 127 (error)
husky – command not found in PATH=/Library/Developer/CommandLineTools/ blah blah”

Solution

I use husky version 8.0.1, and this is how my pre-commit file looked:

As most error messages do, this one also points to the right place to fix the issue. I changed line 4 as below:

npm run lint && npx lint-staged && npm run stylelint
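For context, a husky v8 pre-commit file typically looks like this; a sketch of the standard template, since my exact file is only partially quoted above:

```shell
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

npm run lint && npx lint-staged && npm run stylelint
```

The key change is on the last line: lint-staged is invoked through npx, so the shell no longer needs to find it on its own PATH.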

It worked like a charm. I hope this suggestion works in your case too 🙂 The same error can occur for several reasons, and I am aware that my suggestion may not solve the issue for everyone. However, here is some more insight into my problem.

I noticed that running the “lint-staged” command did not work in Mac's regular terminal, nor in the Visual Studio Code terminal. But it somehow worked fine in WebStorm's terminal. I still don't know why; perhaps WebStorm prepends npx to package commands before running them, or adds some other custom path. It does not matter much.

npx is the Node package runner, so expecting “lint-staged” to work on its own in the VS Code terminal is not realistic. I remembered this fact while checking the GitHub documentation of lint-staged 🙂

Sercan Leylek / OSLO

New approach to same key problem in React

Every React developer must have seen the following warning in their browser console: “Warning: Each child in a list should have a unique "key" prop.” Well, most of us know the answer to this problem: React uses a virtual DOM algorithm and expects you to assign a unique key value to your list items. Let's assume you have an array of people and you are rendering those people one by one as div or li items. There are plenty of articles that offer the solution for an ideal world. However, I will introduce a new approach which solved the case in my real world.

Before we dive into the problem, let’s have a quick summary of some of the best practices out there.

  • Every list item should have a unique key.
  • The key value should not change between renders.
  • Do not use an index value (such as an incremental array loop counter) as a React key.
  • Do not use a random value as a React key (such as an alphanumeric value produced by nanoid).

All of these sound fair. If you fail on one of the points listed above, your application's behaviour will be unpredictable. If you follow this advice, your application may well work fine, but in my case, it simply did not, because business realities are not like the best-practices book. There are priorities, costs and many other constraints when solving a front-end problem.

Definition of problem

Below is a component which lets the user register as many people as they wish. The user inputs the first name, last name, and social security number of each person, clicks the ‘add person’ button, and so on. The user also has a delete option. Please watch the following gif animation.

Vertical design demo

As you must have noticed, there are severe problems in this component, and all of them are related to the same React key issue.

  • An onBlur event on each text input causes a loading animation. Why?
    • To block the user from typing too fast. Otherwise, I had more serious data-related problems.
  • The webpage sometimes scrolls to a random position. Why?
    • Since there are duplicate items with the same React key, the browser does not know where to render.
  • Dynamic validation and keyboard navigation are almost impossible. Why?
    • Because, again, the browser does not know where to show the error message, or where the user was after each render.

This is a nightmare.

Our backend systems send us the same key value for each tuple. Think of it this way: there are 3 values for each person in a tuple (first name, last name and social security number), and every first-name input box has the same key value across Person 1, Person 2, … Person n. We had to solve this problem without making any changes to the backend data. This is the true origin of all the problems above.

Solution

In an ideal world, you would make the backend send your front-end application unique keys for every data tuple, right? But assume that our backend cannot do that because of dependencies on other internal systems. Life is not perfect.

Therefore, I hopelessly tried the forbidden practices to cure the issues. First, you cannot attach the loop index to your key values, because when you delete one person, the following persons shift backwards and all you get is a more confused state. Second, when you use random values as React keys (via nanoid), your application changes key values on every render and the browser gets confused. Apparently, this is a dead end. So, what to do?

Advice #1: Rethink your design

The component in the gif animation above uses a vertical design: every data tuple is rendered, and as the user adds more persons, the list grows. If we replace the vertical design with a horizontal one, every component would have a unique React key. As long as you render one unique id per component, everything should work fine, shouldn't it? Watch the next gif carefully to see whether it works or not.

Horizontal design demo

Nope! It is not working, but why not?

We have two persons in our list:

  • Person 1: John Bravo
  • Person 2: Sara Brown

The user clicks the navigation buttons. The header and button elements increment from Person 1 to Person 2, but the values in the text boxes stay the same. The reason is a violation of unique React keys. You might be rendering one unique key at a time, but the keys of Person 2 are the same as Person 1's. Therefore, the virtual DOM assumes that the values in these text boxes have not changed at all, and its reconciliation algorithm skips the re-render.

“When comparing two React DOM elements of the same type, React looks at the attributes of both, keeps the same underlying DOM node, and only updates the changed attributes.” (Reference: React Docs)

Rethinking design is not an ultimate solution for this case, but it will still help.

Advice #2: Bypass Reconciliation Algorithm

If you learn the rules well enough, you can break them in your own favour. My next advice is a pure hack, but it is a solution. I actually got the inspiration from database studies. Do you remember the terms ‘primary key’ and ‘foreign key’ from DBMS courses?

A primary key is “which attributes identify a record,” and in simple cases constitute a single attribute: a unique ID. (Ref: Wikipedia)

A foreign key is a set of attributes in a table that refers to the primary key of another table. The foreign key links these two tables. (Ref: Wikipedia)

The definition of a primary key is very similar to React keys, and a foreign key is what we are going to create here.

I want to have control over when the component re-renders. So, I need a mechanism which informs React's reconciliation algorithm that it is the correct time to render the DOM.

First, create a state hook which counts the renders. Not all renders, though: this hook will only count the renders that I approve.

const [renderCount, setRenderCount] = useState<number>(0);

Second, create a useEffect(…) hook with the dependencies that I choose.

useEffect(() => {
  setRenderCount((count) => count + 1); // functional update avoids a stale renderCount dependency
}, [index, deleteItemTriggered]);

Here is our foreign key: the cooperation of index and deleteItemTriggered. index changes whenever the user switches from Person 1 to Person 2 (a navigation button click). deleteItemTriggered is self-explanatory: whenever the user clicks the delete-person button, this boolean state flips. The combination of these two events creates my new unique key, the primary key.

Lastly, I use the renderCount value in addition to the component ID, which is duplicated across different persons.

<div key={`${element.id}_${renderCount}`}>
   {RenderTextInput(element, ... blah blah)}
</div>

Here is the result:

Everything works fine 🙂

No more random scrolls. No more loading animations. Everything works as it should.
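The trick boils down to a few lines. The sketch below (with a hypothetical firstName field id) shows how appending the approved render count turns duplicate ids into unique keys:

```typescript
// The composed key: a duplicate component id plus the approved render count.
function makeKey(id: string, renderCount: number): string {
  return `${id}_${renderCount}`;
}

let renderCount = 0;
const keyForPerson1 = makeKey('firstName', renderCount);

renderCount += 1; // the user navigated: index changed, the effect bumped the counter
const keyForPerson2 = makeKey('firstName', renderCount);

// The ids collide, but the composed keys do not, so React remounts the input.
console.log(keyForPerson1, keyForPerson2); // firstName_0 firstName_1
```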

To solve this problem, I googled through many posts and web pages, but none of them covered my what-if scenario: what if I have to solve the problem without unique keys provided by my backend? I could not find an answer. So, the two pieces of advice explained above helped me solve the problem for good. I hope they inspire those who get stuck in the real world just like I did.

If you have a better solution suggestion, please drop a comment.

Sercan Leylek / OSLO

Free Artificial Intelligence Course

As a front-end developer, I happen to hear these terms quite often: deep learning, machine learning, neuroscience, … However, I had neither knowledge nor considerable experience in these fields. And when you don't know something, it scares you even more, because fear leads to the dark side.

Many developers fear being replaced in the future. We fear that artificial intelligence may suddenly cast us out. This is certainly the ignorant and lazy man inside of us all speaking. Therefore, I decided to quieten this grumpy man's voice in me and followed the artificial intelligence course created by the University of Helsinki and reaktor.com.

Course Content

Elements of AI introduces you to common AI patterns, terminology, problem-solving techniques and many other theoretical topics that you should cover. The curriculum is well organised and comprehensive. All in all, this beginner course helps you build the fundamental knowledge to start your AI development career. That said, the creators aimed at a broad audience, so the course is not specifically tailored for programmers or other IT roles, and you don't need programming skills. However, if you are already an IT professional, that background is a big plus for following the discussions.

Here is a list of my favourite topics throughout the course:

  • How to define AI?
  • AI for board games like tic-tac-toe, go, chess.
  • Why should we use different methods to solve different problems?
  • How can we reach General AI?
  • AI Winter
  • The Bayes Rule
  • The types of machine learning
  • Neural networks
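To give a taste of one of these topics: the Bayes rule section makes you compute posterior probabilities by hand. Here is a small TypeScript version of the classic rare-condition example (the numbers are my own illustration, not taken from the course):

```typescript
// Bayes rule: P(condition | positive test) for a test with the given
// sensitivity and specificity, and a prior (base rate) for the condition.
function posterior(prior: number, sensitivity: number, specificity: number): number {
  const truePositives = sensitivity * prior;
  const falsePositives = (1 - specificity) * (1 - prior);
  return truePositives / (truePositives + falsePositives);
}

// A 90%-accurate test for a condition with a 1% base rate:
// even after a positive result, the probability is only about 8%.
console.log(posterior(0.01, 0.9, 0.9)); // ≈ 0.083
```

This counter-intuitive result is exactly the kind of insight the course exercises are built around.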

Exercise Tips

You will be asked to complete exercises in almost every section. Most of them are tests that can be answered with radio buttons. However, I recommend being careful while answering: not all of the questions are easy, and some are tricky. So, think twice before pressing the submit button. I got an 85% success rate, but it would have been higher if I had been more careful and patient before making up my mind about the answers.

The course also provides some open-ended questions, and Elements of AI created a smart solution for reviewing the answers: they use the students' judgment to score one another. For example, after you write your answer (several paragraphs) to an open-ended question, you are required to review three other students' answers to the same question. In this way, the community builds a fair and reliable control mechanism for itself.

After completing the course, you will receive a free digital certificate approved by the University of Helsinki. Of course, the certificate is just window dressing; what you have learnt and what you do with the gained knowledge is far more significant than a piece of paper. However, the certificate might have academic value for university students: Elements of AI grants you 2 ECTS credits. According to the info on their website, these credits are only valid in Finland, but you may try to convince your faculty in another EU country. Just try.

What’s next?

I learned about Elements of AI thanks to the continuous-growth environment at Fremtind AS. My colleague Berit Klundseter shared an introductory post about this course on Fremtind's social network. Then I talked to our Senior Machine Learning Engineer Emanuele Lapponi, and he recommended another course where I could program some practical cases. So, my AI journey continues with the support of my colleagues.

I hope this post made you curious about AI, and if you have the dedication to finish the course, may the force be with you! 🙂

Sercan Leylek

How to mock addEventListener in React Testing Library?

This question puzzled me for some time, and I noticed that there is no good post providing an up-to-date answer to the challenge. This article will help you solve this testing problem in React, and I also provide a sample project with a working application.

Description of the problem

You have a component which adds an event listener inside a useEffect() block. You wrote a unit test, but quickly realised that although the test runs the code inside useEffect, it never runs the function that your event listener registers. So, you basically need to mock addEventListener, but how?

Sample Project: offline-modal

offline-modal is a simple application which utilizes online/offline events. (See reference here: developer.mozilla.org)

When you turn off the internet on your machine (or throttle it in Chrome DevTools), the application shows a message to the user. After the connection is restored, the user receives feedback for 3 seconds that the internet is back, and then this message disappears as well.

Offline Modal Demo

Implementation of offline-modal > OfflineModal.tsx

Before we jump into the unit testing discussion, it is wise to review the implementation.

const OfflineModal: React.FC = () => {
    const [connStatus, setConnStatus] = useState<number>(ConnectionStatus.Online);
    const connStatusRef = useRef(connStatus);
    connStatusRef.current = connStatus;
    useEffect(() => {
        const toggleOffline = () => {
            setConnStatus(ConnectionStatus.Offline);
        };
        let timer: NodeJS.Timeout | undefined;
        const toggleOnline = () => {
            setConnStatus(ConnectionStatus.SwitchingOnline);
            timer = setTimeout(() => {
                if (connStatusRef.current !== ConnectionStatus.Offline) {
                    setConnStatus(ConnectionStatus.Online);
                }
            }, 3000);
        };

        window.addEventListener('online', toggleOnline);
        window.addEventListener('offline', toggleOffline);
        return () => {
            window.removeEventListener('online', toggleOnline);
            window.removeEventListener('offline', toggleOffline);
            if (timer) {
                clearTimeout(timer);
            }
        };
    }, []);

I will explain the critical lines here.

Line 2: This is our state management, which keeps track of the current connectivity status.

Line 19 and 20: I basically add event listeners to follow connection status here. Two separate functions are assigned to online and offline cases.

Line 22 and 23: Before the component (OfflineModal) is unmounted, we remove the event listeners.

In the latter part of the component, I render the 3 different connection statuses (Offline, SwitchingOnline, Online) in a switch statement to inform the user. That's basically it.
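That latter part is omitted from the listing above, but a plain-TypeScript sketch of the switch logic could look like this (the message strings follow the ones asserted in the unit test):

```typescript
// Map each connection status to the message shown to the user.
enum ConnectionStatus { Online, Offline, SwitchingOnline }

function statusMessage(status: ConnectionStatus): string | null {
  switch (status) {
    case ConnectionStatus.Offline:
      return 'You are offline';
    case ConnectionStatus.SwitchingOnline:
      return 'It worked! You are back online! :)';
    default:
      return null; // Online: nothing to show when everything is fine
  }
}
```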

Unit Test Implementation > OfflineModal.test.tsx

This is the critical part. I wrote the function below to mock the internet connection in the Jest environment.

const mockInternetConnection = (status: string) => {
    const events = {};
    jest.spyOn(window, 'addEventListener').mockImplementation((event, handle, options?) => {
        // @ts-ignore
        events[event] = handle;
    });
    const goOffline = new window.Event(status);
    act(() => {
        window.dispatchEvent(goOffline);
    });
};

Line 1: I created a parameter called status. In this example, the input could be the string value 'online' or 'offline'. These are the event names to be listened for.

Line 2: An empty object is created to hold the captured handlers.

Line 3: The core of the mocking happens in this scope. We spy on the window.addEventListener call. Whenever the subject code runs this function in the implementation, we add the corresponding handler into our events object, so that we can run the requested event handler in our unit test. The event can be any other event type, such as click, keydown, etc.

Line 7: This line creates a fake event, but it does nothing on its own. It only assigns the fake event object to a const called goOffline.

Line 9: Here we trigger our fake event just like the implementation code does in reality.

After creating the function (mockInternetConnection), writing the unit test suite is much simpler.

describe('OfflineModalComponent', () => {
    it('switches offline and back online', () => {
        render(<OfflineModal />);
        mockInternetConnection('offline');
        expect(screen.getByText('You are offline')).toBeInTheDocument();
        mockInternetConnection('online');
        expect(screen.getByText('It worked! You are back online! :)')).toBeInTheDocument();
    });
});

This unit test turns off the user’s internet connection and verifies the correct output. Later on, it switches the connection back online and runs one more check.

I hope this blog post helps you solve a similar problem on your side 🙂 Good luck!

Here is the link to the GitHub repository: https://github.com/SercanSercan/storksnest/tree/master/offline-modal

Sercan Leylek

6 Pieces of Advice on Mixpanel Integration

User analytics is one of the most fundamental features that every popular web application should have. Without collecting data about user activities, one can never be sure about the user experience. A development team may invest days in some functionality, but unless they know how many users actually use this precious feature, they can only guess and hope that the users are benefiting from it.

In other words: "Without data you're just another person with an opinion." (W. Edwards Deming)

To clarify the ambiguity, my product team (http://meldeskade.no) decided to use the popular business analytics tool Mixpanel. Our single-page web application is built with ReactJS, uses the Jøkul Design System, and hundreds of users visit the website every day to report their insurance cases. In short, meldeskade.no is a typical SPA which guides the user through a set of operations under some sections and completes the user's request somewhere along the way.

From the perspective of a front-end developer, I thought including Mixpanel tracking would be a simple task. However, as our team gained more experience in the field, I started to realize the obstacles. We followed the path of 'learning by doing', and eventually our Mixpanel tracking started to deliver data. This article will help those who are at the beginning of their Mixpanel journey and looking for some useful advice.

#1 Don’t underestimate the integration work

Know your enemy first! Adding Mixpanel to your web application is an integration process, and system integration is perhaps one of the most problematic fields in information technology, because linking different systems increases complexity. Although the relationship between a web application and Mixpanel is a one-way road, the integration will still be painful, and it has the potential to produce unforeseen errors.

For instance, if you perform a tracking event before initializing Mixpanel with the code below, the application will throw a fatal error and the whole website will be down for the user. This error may seem obvious, but one exceptional code path for some users might be the end of the show. Therefore, do not underestimate the complexity of the integration process.

import { MixpanelUtils } from './MixpanelUtils';
...
MixpanelUtils.Init();
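One defensive option (a sketch with hypothetical names, not our production code) is a wrapper that queues tracking calls made before initialization instead of letting them crash the page:

```typescript
// Hypothetical defensive wrapper: tracking calls made before init are
// queued and flushed later, instead of throwing a fatal error.
class SafeTracker {
  private initialized = false;
  private queue: Array<[string, object]> = [];
  sent: string[] = []; // stand-in for events delivered to Mixpanel

  init(): void {
    this.initialized = true;
    // flush anything that was tracked too early
    this.queue.forEach(([name, props]) => this.send(name, props));
    this.queue = [];
  }

  track(name: string, props: object = {}): void {
    if (!this.initialized) {
      this.queue.push([name, props]); // no crash, just defer
      return;
    }
    this.send(name, props);
  }

  private send(name: string, _props: object): void {
    this.sent.push(name); // here the real mixpanel.track call would happen
  }
}
```

With such a guard, an early call becomes a deferred event rather than a fatal error for the whole website.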

#2 Document every tracking operation first, code later

Integrating a web application with Mixpanel requires teamwork. Naturally, a web developer cannot sit alone and write all the tracking operations on his or her own for a large system. Besides the developers, the product owner, architect, tester, and tech lead should give their consent on which user activities will be tracked. Since people from different roles are involved in the process, it is important to keep the decisions in one open document.

The team behind meldeskade.no created a table which lists all tracking events. For every event we included columns such as its definition, the trigger case for the tracking, its parameters, and the development status, and we even wrote down the JIRA ticket for each tracking. This strategy helped the team manage the complexity of the problem.

Besides, we still keep this document active. Whenever the team needs to implement a new Mixpanel tracking, we start the work by documenting. So, document first, code later.

#3 Don’t try to track too many events on first deployment

Since our team underestimated the complexity of adding Mixpanel to our website, we set an ambitious goal for our first delivery: we tried to implement at least 10 different tracking operations in one go. However, you also have to implement the configuration code for Mixpanel first: OptInTracking, alias, identify, getting the user's cookie consent, etc. This is the threshold a team must cross before it can reach the tracking events themselves. When a team wraps all of this in a single deployment, the development work grows much larger than expected. Therefore, the goal of the first deployment should be a successful Mixpanel configuration plus a couple of meaningful tracking operations.

#4 Surprise! Surprise! You need a backend developer 😀

This is a point our team failed to foresee. We basically thought we had all the data out there: we would just write some well-organized tracking operations in JS, and everything would go smoothly. However, the data flow of a web application is not the same as what you see on a webpage as a human being. For example, you may want to track some category data after the user clicks a section, but the component you are triggering the event from may not receive the category information as a prop. In that case, you need to ask a backend developer to provide this additional data via the corresponding API call.

This tip is closely related to advice #2. If the developers communicate in the documentation about where and how to obtain the data attributes, the process will run in parallel and smoothly.

#5 You will discover defects along the way

This might be one of the greatest paybacks for all the effort. As you implement more tracking on different components of a React application, you will discover defects that were overlooked by you and your fellow front-end developers. Redundant re-rendering problems in React are well known, but mostly ignored. They are not easily discovered by a tester, nor do they produce a run-time error, but they are there. While adding more tracking operations, you will clearly see those mistakes and have the opportunity to fix them with some refactoring. This practice will also improve the performance of the application.

#6 GDPR is everyone’s responsibility. Speak up when in doubt.

Every team member should watch out for potential GDPR violations. More data does not help you understand user interaction better; key data does. Users' private data plays no role in measuring the efficiency of an application. Therefore, speak up whenever you are in doubt about some data your team is about to collect.

Sercan Leylek