Inspiration

Two of the most influential theories in language learning are the Input Hypothesis and the Output Hypothesis, associated with the linguists Stephen Krashen and Merrill Swain respectively. The Input Hypothesis holds that learners acquire a language by receiving input and progressively coming to understand it (Krashen, 1985). This is the learning model adopted by most language learning applications, such as Duolingo and Memrise. The Output Hypothesis, however, is often ignored in these applications. It holds that learners also need to produce the language and express themselves in it (Lantolf, 2013), the model behind applications such as Preply and Tandem. These, in turn, do not focus much on input and rely heavily on the help of other people.

Therefore, we decided to make an application that combines both learning models, immersing users in real-life scenarios so they get the best possible learning experience on the way to speaking like a local.

What it does

Using artificial intelligence models specialising in language and conversation, users can practise speaking a foreign language in all sorts of real-life scenarios at varying levels of difficulty. Their mistakes are corrected during the conversation, and they can translate words they do not understand and add them to an ever-growing vocabulary and grammar list that they can later test themselves on. These scenarios help users understand the structure and context of the language they are using and enable them to learn from their mistakes.
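The vocabulary list described above can be sketched as a small data structure — this is a hypothetical illustration, not our actual schema, and the `VocabEntry`/`VocabList` names and the `times_missed` weighting are our own assumptions here:

```python
import random
from dataclasses import dataclass, field

@dataclass
class VocabEntry:
    """One saved word with its translation and review stats (hypothetical schema)."""
    word: str
    translation: str
    times_missed: int = 0  # bumped whenever the user gets this word wrong in a quiz

@dataclass
class VocabList:
    entries: list = field(default_factory=list)

    def add(self, word: str, translation: str) -> None:
        # Skip duplicates so the list only grows with genuinely new words.
        if not any(e.word == word for e in self.entries):
            self.entries.append(VocabEntry(word, translation))

    def quiz_word(self) -> VocabEntry:
        # Weight selection toward words the user has missed more often.
        weights = [1 + e.times_missed for e in self.entries]
        return random.choices(self.entries, weights=weights, k=1)[0]

vocab = VocabList()
vocab.add("こんにちは", "hello")
vocab.add("ありがとう", "thank you")
vocab.add("こんにちは", "hello")  # duplicate, ignored
print(len(vocab.entries))  # → 2
```

Biasing `quiz_word` by `times_missed` is one simple way to surface words the user keeps forgetting more often.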

How we built it

We used chatandbuild to construct the UI/UX of the application and the vocabulary list features. We integrated the gemma3 model, served through Ollama, for translation and for generating the various immersive scenarios and personalities. We also used ElevenLabs for our speech-to-text and text-to-speech functions.
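As a rough sketch of the Ollama integration: Ollama exposes a `/api/chat` endpoint that accepts a model name and a message list, so a scenario and personality can be injected as a system prompt. The exact prompt wording and the `build_scenario_request` helper below are illustrative assumptions, not our production code:

```python
import json

# Default local Ollama chat endpoint; the payload below would be POSTed here.
OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"

def build_scenario_request(scenario: str, personality: str, language: str,
                           history: list) -> dict:
    """Build the JSON payload for Ollama's /api/chat endpoint.

    The system prompt frames the model as a character in a role-play
    scenario and asks it to gently correct the learner's mistakes.
    """
    system_prompt = (
        f"You are role-playing as {personality} in this scenario: {scenario}. "
        f"Reply only in {language}. If the learner makes a grammar or "
        "vocabulary mistake, correct it briefly before continuing."
    )
    return {
        "model": "gemma3",
        "messages": [{"role": "system", "content": system_prompt}] + history,
        "stream": False,
    }

payload = build_scenario_request(
    scenario="ordering coffee at a café",
    personality="a friendly barista",
    language="Japanese",
    history=[{"role": "user", "content": "コーヒーをください"}],
)
print(json.dumps(payload, ensure_ascii=False, indent=2))
```

Keeping the scenario and personality in the system message means the same conversation loop can serve every scenario just by swapping the prompt.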

Challenges we ran into

Wiring the LLM into the backend and the frontend was cumbersome, especially since we had little to no experience doing so. The application that chatandbuild generated was rather complicated, and extra time was put in to understand what was going on.

Accomplishments that we're proud of

We were able to create the application using a tool we had never used before, chatandbuild. We were also able to give the AI different personalities and have it adapt to different scenarios through prompting, so that it could hold an actual conversation. The translation feature troubled us for a while, but we got it translating and saving words accurately.

What we learned

We learnt a new way to create a website, and we learned new AI-related APIs that we can use in the future. We also planned too many features at the start and struggled with them, so we learnt to manage expectations and plan our goals better, perfecting our main features first before branching out to others.

What's next for GO/ON

We plan to expand the application to mobile and integrate a database to store user information. We will also let users add their own custom scenarios. We will add a feature where the AI tests users on the contents of their vocabulary list, and where scenarios use the words users don't remember more often. Finally, we will add more languages, scenarios and personalities.

References

1) Krashen, S. D. (1985). The input hypothesis: Issues and implications. Longman.

2) Lantolf, J. P. (Ed.). (2013). Sociocultural theory and second language learning (ebook ed.). Oxford University Press.

Built With

  • chatandbuild
  • geminidb
  • redis
  • react