Users upload photos of objects around them, and CogniBoost uses our custom vector-search database (built on InterSystems IRIS) together with generative AI to retrieve personal multimodal data that is visually similar yet contextually different. These image-based activities strengthen users' understanding of the logical differences among their everyday objects and personal belongings (a task backed by NIH), and each response is validated and scored by an LLM.
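The core retrieval step can be sketched in a few lines. This is a simplified, in-memory stand-in for the IRIS vector store (the catalog entries, embeddings, and `similar_but_different` helper are all illustrative assumptions, not the production schema): given the embedding of a new photo, it returns stored items that score high on cosine similarity but come from a different context, which is the pairing the exercises are built from.

```python
import numpy as np

# Hypothetical in-memory stand-in for the IRIS vector store: one
# (label, context, embedding) triple per previously uploaded photo.
CATALOG = [
    ("mug",  "kitchen", np.array([0.90, 0.10, 0.00])),
    ("mug",  "office",  np.array([0.88, 0.12, 0.05])),
    ("vase", "kitchen", np.array([0.85, 0.20, 0.10])),
    ("shoe", "hallway", np.array([0.10, 0.90, 0.30])),
]

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def similar_but_different(query_emb, query_context, k=2, threshold=0.8):
    """Return up to k items that look like the query (similarity above the
    threshold) but come from a *different* context than the query photo."""
    scored = [
        (cosine(query_emb, emb), label, ctx)
        for label, ctx, emb in CATALOG
        if ctx != query_context            # exclude same-context matches
    ]
    scored = [s for s in scored if s[0] >= threshold]
    scored.sort(reverse=True)              # most similar first
    return [(label, ctx) for _, label, ctx in scored[:k]]
```

For example, querying with the embedding of a mug photographed in the kitchen surfaces the visually similar mug from the office, while the dissimilar shoe is filtered out. In production the same nearest-neighbor ranking is performed by IRIS's vector search rather than in Python.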
CogniBoost then guides users through neurophysical tasks (e.g., classification, logic puzzles, height ordering, and garden and species identification) generated through spatiotemporal scene understanding and LLMs.
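Task generation and scoring both reduce to prompt construction around the retrieved image pair. A minimal sketch, assuming a hypothetical prompt format (the field names and JSON grading schema below are illustrative, not the exact production prompts):

```python
def build_task_prompt(item_a, item_b):
    """Ask the LLM to turn a similar-but-different pair into a
    compare-and-contrast exercise (illustrative prompt, not production)."""
    return (
        "Create a short cognitive-training question that asks the user to "
        f"identify the logical differences between their {item_a[0]} "
        f"(seen in the {item_a[1]}) and their {item_b[0]} "
        f"(seen in the {item_b[1]})."
    )

def build_scoring_prompt(task, user_answer):
    """Ask the LLM to validate and score the user's answer, returning a
    structured JSON verdict (illustrative schema)."""
    return (
        "You are grading a cognitive-training exercise.\n"
        f"Task: {task}\n"
        f"User answer: {user_answer}\n"
        'Reply only with JSON: {"correct": true|false, '
        '"score": 0-100, "feedback": "one sentence"}.'
    )
```

The structured-JSON reply makes the LLM's verdict machine-parseable, so the score can be recorded and fed back into task difficulty without brittle free-text parsing.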