Inspiration

I was going through the ideas generator to see what we could work on when I came across Neural Synchronisation Learning, which takes real-time feedback and assessments into account. Around the same time, a LinkedIn connection shared a hackathon project that provides AI support to children who find writing difficult because of hand tremors. While brainstorming, I decided to build a platform that integrates mental health support, a readability format for people with essential tremors, and support for introverts, i.e. accessibility (a11y).

What it does

It moderates group discussions: it encourages people to share ideas if they have been quiet and nudges them to share the air if they have been talking a lot. It provides prompts to initiate a discussion and makes sure everyone gets to speak without anyone exceeding their timeframe. It also offers text-to-speech, so participants with visual impairments can listen to the prompts; a minimal sketch of that option follows.
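Here is how a prompt could be read aloud with the Azure Speech SDK. This is a sketch, not the project's actual code: the environment-variable names and the example prompt are illustrative assumptions.

```python
# Minimal sketch: read a discussion prompt aloud with Azure text-to-speech.
# Assumes `pip install azure-cognitiveservices-speech`; the env var names
# are illustrative, not the project's actual configuration.
import os
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription=os.environ["AZURE_SPEECH_KEY"],
    region=os.environ["AZURE_SPEECH_REGION"],
)
# With no audio config given, output goes to the default system speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

def read_prompt_aloud(prompt: str) -> None:
    """Speak a discussion prompt so participants can listen instead of read."""
    result = synthesizer.speak_text_async(prompt).get()
    if result.reason != speechsdk.ResultReason.SynthesizingAudioCompleted:
        print(f"Synthesis failed: {result.reason}")

read_prompt_aloud("Today's prompt: what makes a group discussion feel safe?")
```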

How we built it

This is a prototype. I built the speech recognition and sentiment analysis pipeline along with the core neural synchronisation algorithms that identify speech dominance, reward balanced participation, and adapt responses. Azure Speech Service handles real-time speech recognition to transcribe spoken content. On top of the transcripts, I implemented algorithms that analyse turn-taking and participation ratios, with thresholds that flag dominance based on speech duration and participation ratio. Real-time feedback is wired into the discussion platform: prompts and topics rotate based on group dynamics, and a recognition system rewards balanced participation. Finally, I integrated Azure Text Analytics for sentiment analysis of written content, so the AI can adapt its responses to the detected sentiment and apply intervention strategies that guide the discussion. Rough sketches of these pieces follow.
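First, the transcription layer. This sketch uses the Speech SDK's continuous recognition and records each utterance's duration so it can feed the participation statistics; the single-recognizer setup and variable names are assumptions for illustration, not the prototype's exact wiring.

```python
# Sketch: continuous transcription that also records how long each utterance
# lasted, feeding the participation statistics. Assumes the Azure Speech SDK.
import os
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription=os.environ["AZURE_SPEECH_KEY"],
    region=os.environ["AZURE_SPEECH_REGION"],
)
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

utterances = []  # (text, duration_seconds)

def on_recognized(evt: speechsdk.SpeechRecognitionEventArgs) -> None:
    if evt.result.reason == speechsdk.ResultReason.RecognizedSpeech:
        # The SDK reports duration in 100-nanosecond ticks.
        seconds = evt.result.duration / 10_000_000
        utterances.append((evt.result.text, seconds))

recognizer.recognized.connect(on_recognized)
recognizer.start_continuous_recognition()
# ... run the discussion ...
recognizer.stop_continuous_recognition()
```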
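The dominance check can be as simple as comparing each participant's share of total speaking time against thresholds. The 40%/10% cut-offs and the feedback messages below are illustrative assumptions, not the tuned values from the prototype.

```python
# Sketch of the participation-balance check: compare each participant's
# share of total airtime against thresholds and pick a feedback message.
DOMINANCE_SHARE = 0.40  # flagged if one voice holds over 40% of airtime
QUIET_SHARE = 0.10      # nudged if a voice holds under 10% of airtime

def participation_feedback(speaking_seconds: dict[str, float]) -> dict[str, str]:
    total = sum(speaking_seconds.values()) or 1.0
    feedback = {}
    for name, seconds in speaking_seconds.items():
        share = seconds / total
        if share > DOMINANCE_SHARE:
            feedback[name] = "Thanks for the energy! Let's hear some other voices."
        elif share < QUIET_SHARE:
            feedback[name] = "We'd love to hear your take on this."
        else:
            feedback[name] = "Great balance, keep it up."
    return feedback

print(participation_feedback({"Asha": 300.0, "Ben": 40.0, "Chen": 90.0}))
```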
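And a sketch of the sentiment-driven intervention with Azure Text Analytics; the endpoint and key variable names and the intervention wording are assumptions for illustration.

```python
# Sketch: flag strongly negative messages and return a moderator nudge.
# Assumes `pip install azure-ai-textanalytics`.
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint=os.environ["AZURE_LANGUAGE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_LANGUAGE_KEY"]),
)

def intervention_for(message: str) -> str | None:
    """Return a moderator nudge when a message reads as negative."""
    doc = client.analyze_sentiment([message])[0]
    if not doc.is_error and doc.sentiment == "negative":
        return "Let's take a breath and restate that as a constructive point."
    return None

print(intervention_for("This idea is terrible and this meeting is a waste."))
```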

Challenges we ran into

We faced technical challenges implementing real-time speech recognition; extensive testing and fine-tuning were needed to get accurate, timely transcription of spoken content. The sentiment analysis and neural synchronisation algorithms posed challenges of their own in accuracy and real-time processing, which we addressed by iteratively refining them. Finally, the hackathon's time constraints forced careful prioritization of features: the essential functionality is in place, but some features will need further refinement and expansion in future iterations on the way to a fully working application.

Accomplishments that we're proud of

I take pride in the innovative neural synchronisation algorithms: they identify speech dominance, encourage balanced participation, and adapt responses in real time as the group's dynamics evolve. Leveraging Azure Cognitive Services, the system accurately transcribes spoken content in real time, enhancing the overall user experience.

What we learned

It doesn't take a lot of effort to make learning tools inclusive for people who might otherwise have a hard time using them.

What's next for Neural Synchronisation Collaborative Learning System

We want to embed more functionality to make the platform accessible to as wide an audience as we can, such as combining speech recognition with gesture recognition to give participants additional ways to engage in discussions. We also plan to extend the language support and natural language processing capabilities to serve a diverse user base, including multiple languages and dialects; a sketch of one option follows.
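One possible direction for multilingual support is the Speech SDK's automatic source-language detection, which picks the speaker's language from a candidate list. The candidate languages here are examples only, not a committed roadmap.

```python
# Sketch: let the Azure Speech SDK auto-detect the spoken language from a
# short candidate list before transcribing. Candidates are examples only.
import os
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription=os.environ["AZURE_SPEECH_KEY"],
    region=os.environ["AZURE_SPEECH_REGION"],
)
auto_detect = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["en-US", "hi-IN", "es-ES", "fr-FR"]
)
recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    auto_detect_source_language_config=auto_detect,
)
result = recognizer.recognize_once()
detected = speechsdk.AutoDetectSourceLanguageResult(result)
print(detected.language, result.text)
```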

Built With

Azure Speech Service, Azure Text Analytics
