Inspiration
For years, neuroscience has struggled with a fundamental gap: we can collect enormous multimodal brain datasets, but we still can't understand the deeper geometric structure of cognition.
EEG signals, biometrics, and behavioral data all hint at high-dimensional cognitive states, yet current tools reduce everything to a handful of flat labels like “left hand motor imagery” or “attention vs rest.” We wanted something deeper — a system that could map, interpret, and reason about the brain’s latent manifold in real time.
When we saw SpoonOS's agentic infrastructure, we realized something huge: for the first time, we could build an AI system that behaves like a neuroscientist.
SpoonOS gave us:
- native multi-agent coordination
- tool calling
- a persistent cognitive state for agents
- reproducible reasoning
- and a perfect environment for scientific workflows
So our inspiration became:
What if we combine agentic reasoning, hierarchical manifold learning, and scientific RAG to build an automated neuroscientist capable of discovering structure in brain signals?
That became Aurora.
What We Built
Aurora is a SpoonOS-native AI4Science system that learns a hierarchical cognitive manifold from multimodal neural and biometric signals. It uses:
- a multimodal neural encoder
- a hierarchical clustering engine
- temporal coherence modeling
- logarithmic pruning
- a scientific retrieval layer
- and a coordinated team of SpoonOS agents
Together, these components allow the system to:
- Learn emergent latent cognitive states
- Annotate clusters using neuroscience literature
- Generate hypotheses about brain function
- Test those hypotheses against new data
- Produce interpretable scientific reports
The system effectively acts as an AI neuroscientist, automatically exploring and explaining the geometry of cognition.
Technical Foundations
## 1. Multi-Modal Neural Embeddings
We designed an encoder that fuses:
- EEG-like time series
- EMG and heart-rate variability
- GSR
- eye-tracking coordinates
- behavioral labels
To generate embeddings \( z_t \in \mathbb{R}^D \), we used:
- 1D convolutions for temporal locality
- transformers for long-range dependencies
- cross-modality attention for fusion
The result: smooth cognitive trajectories in embedding space.
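In simplified form, such a fusion encoder can be sketched in PyTorch as below; the layer sizes, channel counts, and module arrangement are illustrative assumptions, not the exact production architecture:

```python
import torch
import torch.nn as nn

class FusionEncoder(nn.Module):
    """Illustrative fusion encoder: per-modality 1D convs for temporal
    locality, cross-modality attention for fusion, and a transformer
    encoder for long-range dependencies."""
    def __init__(self, modality_dims, d_model=64, n_heads=4):
        super().__init__()
        # One temporal conv stem per modality (EEG, EMG/HRV, GSR, gaze, ...)
        self.stems = nn.ModuleList(
            nn.Conv1d(c, d_model, kernel_size=5, padding=2) for c in modality_dims
        )
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=128, batch_first=True
        )
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, xs):
        # xs: one (batch, channels_m, time) tensor per modality
        feats = [stem(x).transpose(1, 2) for stem, x in zip(self.stems, xs)]
        tokens = torch.cat(feats, dim=1)        # concat modality token streams
        fused, _ = self.cross_attn(tokens, tokens, tokens)  # cross-modality fusion
        z = self.temporal(fused)                # long-range temporal context
        return z.mean(dim=1)                    # one embedding z_t per window

# Four toy modalities: 8-ch EEG, 3-ch EMG/HRV, 1-ch GSR, 2-ch gaze
enc = FusionEncoder(modality_dims=[8, 3, 1, 2])
z = enc([torch.randn(4, c, 128) for c in [8, 3, 1, 2]])
print(z.shape)  # torch.Size([4, 64])
```

Concatenating the per-modality token streams before self-attention is one simple way to let every modality attend to every other; the real encoder may fuse differently.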
## 2. Hierarchical Manifold Engine
We constructed a dynamic manifold:
\[ \mathcal{T} = \{C_1, C_2, \dots\} \]
where each cluster \(C_i\) recursively splits into finer subclusters based on a distance criterion:
\[ d(z, \mu_{C_i}) < \tau \]
We implemented:
- recursive subclustering
- prototype reallocation
- cluster birth/death rules
- manifold refinement based on new data
- temporal transition penalties
This allows the system to uncover structure across coarse → fine levels of cognition.
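In simplified form, the recursive subclustering with a size-based birth rule looks like the sketch below; the k-means splits, depth limit, and thresholds are illustrative, and the full engine additionally handles prototype reallocation, cluster death, and temporal penalties:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_tree(Z, depth=0, max_depth=3, min_size=30, k=2):
    """Recursively split embeddings into a coarse-to-fine cluster tree.
    Returns a dict node: {'mu': centroid, 'size': n, 'children': [...]}."""
    node = {"mu": Z.mean(axis=0), "size": len(Z), "children": []}
    if depth >= max_depth or len(Z) < 2 * min_size:
        return node                      # leaf: too small or too deep to split
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Z)
    for c in range(k):
        sub = Z[labels == c]
        if len(sub) >= min_size:         # "birth" rule: keep only viable subclusters
            node["children"].append(build_tree(sub, depth + 1, max_depth, min_size, k))
    return node

# Three well-separated synthetic cognitive states in 8-D embedding space
rng = np.random.default_rng(0)
Z = np.vstack([rng.normal(m, 0.3, size=(200, 8)) for m in (-2, 0, 2)])
tree = build_tree(Z)
print(tree["size"], len(tree["children"]))
```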
## 3. Temporal Coherence
Cognitive dynamics obey approximate smoothness:
\[ p(s_t \mid s_{t-1}) \propto \exp(-\alpha \cdot \text{dist}(s_t, s_{t-1})) \]
We modeled this with a hybrid:
- HMM-style transition prior
- short-window smoothing
- dynamic transition constraints
This dramatically stabilizes decoded trajectories.
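A condensed sketch of the distance-based transition prior, decoded Viterbi-style; the value of alpha and the toy likelihoods are illustrative:

```python
import numpy as np

def smooth_states(log_lik, centroids, alpha=1.0):
    """Viterbi-style decoding with a distance-based transition prior:
    log p(s_t | s_{t-1}) = -alpha * ||mu_{s_t} - mu_{s_{t-1}}|| + const."""
    T, N = log_lik.shape
    # Pairwise centroid distances define the transition penalty
    D = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
    log_trans = -alpha * D
    score = log_lik[0].copy()
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans       # (prev_state, cur_state)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_lik[t]
    # Backtrace the most probable smooth trajectory
    path = [int(score.argmax())]
    for t in range(T - 1, 1 - 1, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1][1:] if len(path) > T else path[::-1]

centroids = np.array([[0.0], [1.0], [5.0]])
log_lik = np.full((5, 3), -5.0)
log_lik[:, 0] = 0.0
log_lik[2] = [-1.0, -5.0, 0.0]   # one noisy frame prefers the distant state
path = smooth_states(log_lik, centroids, alpha=2.0)
print(path)  # [0, 0, 0, 0, 0] — the jump to state 2 is suppressed
```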
## 4. Logarithmic Pruning
Instead of brute-force state search:
- compute cluster likelihoods
- prune improbable branches
- recursively refine promising paths
This reduces \(N\) states to roughly \(\log N\) effective candidates at each step.
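A toy version of the pruned descent over the cluster tree; the hand-built tree, beam width, and distance-based likelihood proxy are illustrative:

```python
import numpy as np

def decode_pruned(tree, z, beam=2):
    """Descend the cluster tree, expanding only the `beam` most likely
    children at each level; visits O(beam * depth) ≈ O(log N) nodes."""
    frontier = [tree]
    visited = 0
    while True:
        children = [c for node in frontier for c in node["children"]]
        if not children:
            break
        visited += len(children)
        # Likelihood proxy: negative distance to the child centroid
        scores = [-np.linalg.norm(z - c["mu"]) for c in children]
        order = np.argsort(scores)[::-1][:beam]   # prune improbable branches
        frontier = [children[i] for i in order]
    best = max(frontier, key=lambda n: -np.linalg.norm(z - n["mu"]))
    return best, visited

def leaf(mu): return {"mu": np.array(mu, float), "children": []}
def node(mu, kids): return {"mu": np.array(mu, float), "children": kids}

# Tiny two-level tree: two coarse states, each with two fine substates
tree = node([0.0], [
    node([-2.0], [leaf([-3.0]), leaf([-1.5])]),
    node([2.0],  [leaf([1.5]),  leaf([3.0])]),
])
best, visited = decode_pruned(tree, np.array([2.9]), beam=1)
print(best["mu"], visited)  # [3.] 4 — only 4 of 6 non-root nodes examined
```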
## 5. Scientific RAG Layer
This was one of the most transformative components.
When the system discovers a cluster, it retrieves:
- neuroscience papers (PubMed, arXiv)
- EEG frequency band references
- BCI competition literature
- cognitive task documentation
A cluster like:
\[ C_5 = \{ \text{high gamma}, \text{low HRV}, \text{frontal dominance} \} \]
becomes:
“Matches frontal gamma associated with cognitive load; similar to findings in Smith et al., 2021.”
This makes the manifold interpretable and grounded in science.
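A stripped-down illustration of the cluster → literature mapping: the snippets and hand-made feature vectors below are toy stand-ins for embedded PubMed/arXiv chunks served from the real retrieval index.

```python
import numpy as np

# Toy literature index over (gamma power, alpha power, low-HRV) features
corpus = [
    ("frontal gamma increases under high cognitive load", [1.0, 0.0, 0.5]),
    ("alpha power rises during relaxed wakefulness",      [0.0, 1.0, 0.0]),
    ("low HRV accompanies sustained mental effort",       [0.2, 0.0, 1.0]),
]
doc_vecs = np.array([v for _, v in corpus])
doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)

def annotate_cluster(feature_vec, top_k=2):
    """Return the snippets whose feature vectors are most cosine-similar
    to the cluster's signature (stand-in for a FAISS lookup)."""
    q = np.asarray(feature_vec, float)
    q /= np.linalg.norm(q)
    sims = doc_vecs @ q
    return [corpus[i][0] for i in np.argsort(sims)[::-1][:top_k]]

# Cluster C_5: high gamma, low alpha, low HRV
hits = annotate_cluster([0.9, 0.1, 0.8])
print(hits[0])  # frontal gamma increases under high cognitive load
```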
SpoonOS Agents — The Core of the System
We built four collaborating agents on SpoonOS:
### 1. AcquisitionAgent
- Streams signals
- Applies preprocessing steps retrieved from RAG
- Sends embeddings to manifold engine
### 2. ManifoldAgent
- Builds and updates the hierarchical cognitive tree
- Manages pruning
- Enforces temporal coherence
### 3. DecoderAgent
- Computes posterior state
- Maps states to known neuroscientific signatures
### 4. NeuroscientistAgent
- Runs scientific RAG queries
- Proposes hypotheses
- Designs experiments
- Generates human-readable scientific reports
SpoonOS made this possible by giving us:
- persistent agent memory
- deterministic tool chains
- clean, isolated reasoning
- the ability for agents to call each other's tools
- real-time UI hooks
This is where our project stands out most.
Frontend & UI (with TRAE)
We built our UI using TRAE, which let us bind agent tools directly to the interface without manually managing state. The manifold visualization, cluster annotations, and agent logs update automatically as agents reason — the UI literally reflects the cognition of the system.
This allowed:
- real-time embedding animations
- hierarchical tree visualizations
- pruning timelines
- RAG annotations appearing live as literature is retrieved
- auto-generated experiment reports
TRAE made the frontend feel alive.
How We Built It
We divided the work into four main pipelines:
- Modeling pipeline
  - encoder training
  - clustering
  - pruning heuristics
  - transition modeling
- RAG pipeline
  - embedding retrieval
  - chunking neuroscience texts
  - cluster → literature mapping
- SpoonOS agent ecosystem
  - tool definitions
  - agent roles
  - inter-agent messaging
- Frontend pipeline
  - TRAE reactive components
  - manifold explorer
  - agent dashboards
Challenges We Faced
1. Balancing temporal coherence with cluster flexibility
Too strict → the system gets stuck. Too loose → it jumps between states. We had to develop a hybrid transition model.
2. Building useful scientific RAG
Neuroscience literature is noisy. We created relevance filters and cluster → keyword mapping.
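In spirit, that mapping works like the sketch below; the feature names, keyword lists, and threshold are illustrative placeholders, not the actual filter configuration:

```python
# Dominant cluster features are mapped to focused search keywords;
# everything below the relevance threshold is dropped from the query.
KEYWORDS = {
    "gamma":   ["gamma oscillations", "cognitive load"],
    "alpha":   ["alpha power", "relaxation"],
    "low_hrv": ["heart rate variability", "mental effort"],
}

def cluster_query(features, threshold=0.5):
    """Turn dominant cluster features into a literature search query."""
    terms = []
    for name, value in features.items():
        if value >= threshold:            # relevance filter
            terms += KEYWORDS.get(name, [])
    return " AND ".join(terms)

q = cluster_query({"gamma": 0.9, "alpha": 0.1, "low_hrv": 0.8})
print(q)  # gamma oscillations AND cognitive load AND heart rate variability AND mental effort
```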
3. Getting multiple agents to cooperate cleanly
SpoonOS made tool definition easier, but designing agent roles + responsibilities took iteration.
4. UI reactivity
With TRAE, we had to design clean tool outputs so UI updates remained stable and fast.
What We Learned
- Scientific RAG dramatically enhances interpretability
- Hierarchical manifolds reveal structure flat classifiers miss entirely
- SpoonOS is extremely powerful for scientific workflows
- Agents + tools → the closest thing to an AI scientist
- TRAE lets you build reactive scientific UIs incredibly quickly
- Cognitive manifolds can be discovered in real time
- AI4Science requires data-driven learning and knowledge retrieval
Aurora transforms raw neural signals into an interpretable cognitive manifold, enriched by scientific retrieval, and operated by a team of coordinated SpoonOS agents. It is both a scientific tool and a new paradigm: an automated neuroscientist capable of discovery, interpretation, and reasoning.
We believe this project showcases what is possible when multi-agent intelligence, scientific retrieval, hierarchical modeling, and real-time visualization come together inside SpoonOS.
Built With
- agent
- arxiv
- cytoscape.js
- d3.js
- faiss
- hdbscan
- hmm
- javascript
- json
- k-means++
- kalman
- multi-agent
- next.js
- numpy
- orchestration
- plotly
- pubmed
- python
- pytorch
- react
- scikit-learn
- scipy
- spoonos
- sqlite
- t-sne
- trae
- typescript
- umap