Inspiration
Have you ever worried about your grandma after something she said? Don't worry, we have too.
Over 7 million Americans don't realize they have mild cognitive impairment (MCI), and studies show that by age 70, two out of three people will experience some level of cognitive impairment. Nearly 37% of women and 24% of men are at risk for long-term cognitive issues, making brain health a growing concern not just for older adults, but for seemingly healthy individuals too. Early diagnosis and proactive care are key to preventing long-term damage in our loved ones. For those already showing signs, it's crucial to provide accurate diagnostics and connect them with smart, easy-to-use digital tools that fit into everyday life.
With the rise of GenAI and advanced visual intelligence and databases for predictive healthcare, we set out to create a novel multimodal way for individuals to participate in cognitive, neuroscience-backed tasks using their own household objects.
What it does
CogniBoost uses your household objects together with computer vision, augmented reality, and retrieval-augmented generation (RAG) to process multimodal input, mostly visual data, along with spatiotemporally tagged tasks, and generates short, dynamic exercises for everyday acute cognition. We focus on three major contributors to cognitive sharpness and the visuospatial tasks that support them:
Users upload photos of objects around them, and CogniBoost uses our vector search and database (built on top of InterSystems IRIS) plus generative AI to retrieve multimodal data that is visually similar to, yet contextually different from, the user's own photos. These image activities strengthen users' grasp of logical differences among their everyday objects and personal belongings (a task design backed by NIH research), and an LLM validates and scores each response.
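As an illustration, here's a minimal sketch of that similarity lookup, assuming IRIS 2024+ vector search through the intersystems-irispython DB-API driver, a CLIP-style encoder from sentence-transformers, and a hypothetical CogniBoost.Photos table with an embedding column; the exact schema and SQL in our build differ.

```python
from PIL import Image
from sentence_transformers import SentenceTransformer
import iris  # intersystems-irispython DB-API driver

encoder = SentenceTransformer("clip-ViT-B-32")  # example image encoder

def find_contextual_neighbors(conn, image_path, top_k=5):
    # Embed the uploaded photo into the same space as the stored photos.
    vec = encoder.encode(Image.open(image_path))
    vec_str = ",".join(str(float(x)) for x in vec)
    cur = conn.cursor()
    # Rank stored photo embeddings by cosine similarity (IRIS vector search).
    cur.execute(
        f"""
        SELECT TOP {int(top_k)} photo_id, caption
        FROM CogniBoost.Photos
        ORDER BY VECTOR_COSINE(embedding, TO_VECTOR(?, DOUBLE)) DESC
        """,
        [vec_str],
    )
    return cur.fetchall()
```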
CogniBoost directs users through neurophysical tasks (e.g., classification, logic puzzles, height ordering, garden and species identification) generated through spatiotemporal scene understanding and LLMs.
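As a rough sketch of that generation step, using the OpenAI chat API as a stand-in for our LLM pipeline (the model name, prompt, and scene_objects format are illustrative assumptions, not our exact build):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_task(scene_objects):
    """scene_objects: detections from the scene-understanding step, e.g.
    [{"label": "mug", "height_cm": 9}, {"label": "vase", "height_cm": 24}]."""
    prompt = (
        "You design 30-second cognitive exercises for older adults. "
        f"Objects detected in the user's room: {scene_objects}. "
        "Write ONE short task asking the user to classify these objects, "
        "order them by height, or spot a logical odd-one-out."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```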
CogniBoost also addresses the diagnostic side of preventative healthcare, offering quick screenings with basic predictive analytics built on hand-tracking computer-vision tasks.
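For instance, a tremor-style screen can be sketched with MediaPipe hand tracking: follow the index fingertip across a few seconds of webcam frames and report positional jitter. This is an illustrative toy, not a clinically validated metric, and it simplifies what our pipeline does.

```python
import cv2
import mediapipe as mp
import numpy as np

# Track the index fingertip (MediaPipe landmark 8) for ~5 seconds of video.
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)
points = []
while len(points) < 150:  # ~5 seconds at 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        tip = results.multi_hand_landmarks[0].landmark[8]
        points.append((tip.x, tip.y))
cap.release()

# Mean per-axis standard deviation of the fingertip as a crude jitter score.
jitter = float(np.std(np.asarray(points), axis=0).mean()) if points else 0.0
print(f"fingertip jitter: {jitter:.4f}")  # higher = shakier hand
```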
How we built it
See our diagram below.
Challenges we ran into
We had a lot of core functionality we were super pumped about building out with some great APIs. However, much of our time went into figuring out how to efficiently route our data processing across our three servers, each running a different LLM, RAG, or CV pipeline. Having vector stores and search capabilities over our data centralized that process and made it far more manageable. It was definitely challenging to get our advanced visual backend rendering well within just 36 hours, but it worked out!
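The routing itself boiled down to a small dispatcher. A simplified sketch follows; the endpoints, hosts, and payload shapes here are hypothetical stand-ins for our actual servers.

```python
import requests

# One endpoint per pipeline server; names and ports are illustrative.
SERVERS = {
    "rag": "http://rag-server:8001/query",
    "llm": "http://llm-server:8002/generate",
    "cv":  "http://cv-server:8003/analyze",
}

def route(task_type, payload):
    """Send a job to whichever pipeline server owns this task type."""
    url = SERVERS.get(task_type)
    if url is None:
        raise ValueError(f"unknown task type: {task_type}")
    return requests.post(url, json=payload, timeout=30).json()
```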
Accomplishments that we're proud of
We were able to integrate various types of APIs and create a really immersive application that addresses the pain points we saw in traditional cognitive/physical task systems: hardware costs, lack of immersion, and lack of engagement.
What we learned
Integrating so many APIs and pipelines can be hard, but it's doable!
What's next for CogniBoost
Picture this: Grandma steps into her garden, pulls out CogniBoost, and we ask her to take a photo. Our scene-recognition system identifies her gorgeous garden, suggests a flower arrangement, and guides her to organize it by flower type. She probably doesn't even realize it, but she's doing what she loves with the items closest to her and taking steps toward her long-term cognitive health.

