MeloTune 2.9.0 introduces Social Music Mesh — nearby devices couple their emotional state over a peer-to-peer mesh, so two listeners drift into the same feeling without ever hearing the same song. The on-device model now learns from how you listen, not what you tell it.
When a friend is nearby, your MeloTune and theirs become coupled agents on the same mesh. Mood and genre preferences converge in real time — you don't play the same songs, you share the same emotional trajectory.
Two phones on the same network. Each runs its own Liquid Neural Network with bimodal time constants: fast neurons track mood, slow neurons preserve taste. When the devices meet, SVAF evaluates each incoming Cognitive Memory Block per-field and decides — autonomously, on-device — what to remix into its own state.
The result: you naturally drift into the same genre as the person beside you, without either of you choosing. Different tracks, same feeling.
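Under the hood, the coupling can be pictured as a per-field blend. The sketch below is illustrative only: `CognitiveMemoryBlock` as a field map, the blend coefficients, and the fast/slow field split are all assumptions, not MeloTune's shipped SVAF implementation.

```swift
/// Illustrative sketch only. Field names, coefficients, and the blend rule
/// are assumptions, not MeloTune's shipped SVAF implementation.
struct CognitiveMemoryBlock {
    /// Each field carries a scalar state, e.g. "mood", "genre", "taste".
    var fields: [String: Double]
}

struct SVAFEvaluator {
    // Bimodal time constants per the notes above: fast neurons track mood,
    // slow neurons preserve taste. The numeric values are invented.
    let fastAlpha = 0.30
    let slowAlpha = 0.05
    let slowFields: Set<String> = ["taste", "genre"]

    /// Decide, field by field, how much of an incoming peer block to remix
    /// into the local state. Returns the updated local block.
    func evaluate(local: CognitiveMemoryBlock,
                  incoming: CognitiveMemoryBlock) -> CognitiveMemoryBlock {
        var merged = local
        for (name, peerValue) in incoming.fields {
            let alpha = slowFields.contains(name) ? slowAlpha : fastAlpha
            let own = local.fields[name] ?? 0
            // Exponential moving average: drift toward the peer, never copy.
            merged.fields[name] = (1 - alpha) * own + alpha * peerValue
        }
        return merged
    }
}
```

An exponential moving average is one plausible coupling rule: each device drifts toward its peer's trajectory without ever copying it outright, which is what "different tracks, same feeling" implies.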
MeloTune observes how you actually listen and uses that to shape what comes next. Every skip is a gradient. Every completion is confirmation. The model keeps a separate preference trajectory per genre and per time-of-day — morning rock feels different from evening rock, because it is.
Implicit preference. A skip in the first 15 seconds is weighted differently from one at 02:40; the model treats them as different gestures, not the same thumbs-down.
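As a sketch of how skip position might become distinct gestures: only the 15-second cutoff and its weight come from these notes, while the late-skip taper is an assumption.

```swift
/// Hypothetical mapping from skip position to a signed preference signal.
/// Only the 15-second cutoff and the -0.42 weight appear in the release
/// notes (see the table below); the taper is invented for illustration.
func skipSignal(atSeconds t: Double, trackLength: Double) -> Double {
    switch t {
    case ..<15.0:
        return -0.42                          // early skip: strong rejection
    case ..<(trackLength * 0.8):
        return -0.42 * (1 - t / trackLength)  // later skips count for less
    default:
        return 0.0                            // a near-end skip is almost a completion
    }
}
```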
Turning it up is a signal. Turning it down is a signal. Favorites are the loudest signal — they bias the slow-neuron taste trajectory, not just the fast mood one.
Your 7am jazz and your 11pm jazz are separate preference vectors. The same song, surfaced at the wrong hour, isn't the same song to the model.
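One way to key those vectors, assuming simple hour buckets; the bucket boundaries and the `PreferenceKey` shape are illustrative, not the actual granularity.

```swift
/// Hypothetical keying: one preference vector per (genre, time bucket).
/// Bucket boundaries are assumed; MeloTune's real granularity isn't published.
enum TimeBucket: Hashable {
    case morning, afternoon, evening, night

    init(hour: Int) {
        switch hour {
        case 5..<12:  self = .morning
        case 12..<17: self = .afternoon
        case 17..<22: self = .evening
        default:      self = .night
        }
    }
}

struct PreferenceKey: Hashable {
    let genre: String
    let bucket: TimeBucket
}

// 7am jazz and 11pm jazz resolve to different keys, hence different vectors.
var trajectories: [PreferenceKey: [Double]] = [:]
let morningJazz = PreferenceKey(genre: "jazz", bucket: TimeBucket(hour: 7))   // .morning
let nightJazz   = PreferenceKey(genre: "jazz", bucket: TimeBucket(hour: 23))  // .night
```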
No manual tuning. No rating prompts. The Liquid Neural Network integrates every listening session into the per-agent αᶠ weights overnight; the table below shows how each signal maps into those weights.
| Signal | Fields | Weight (αᶠ) | Time constant | Notes |
|---|---|---|---|---|
| Skip < 15s | mood, genre | −0.42 | fast | Strong negative on current trajectory; minimal taste impact. |
| Completion | mood, arousal | +0.28 | fast | Confirms fit. Compounds if repeated. |
| Favorite | taste, genre | +0.74 | slow | Biases long-horizon preference vector. |
| Volume ↑ | arousal, intent | +0.19 | fast | Proxy for immersion. Gated by context. |
| Volume ↓ | arousal, focus | −0.16 | fast | Often signals focus-seeking, not dislike. |
| Repeat | taste | +0.61 | slow | Session-scoped. Higher weight when manually invoked. |
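Read as an update rule, the table could apply roughly as below. The αᶠ values are the published ones; routing them into fast versus slow state, and the overnight consolidation, are assumed mechanics.

```swift
/// Sketch of applying the table's per-signal weights. The weights are from
/// the table above; the fast/slow routing is an assumed mechanism, and the
/// per-field breakdown is collapsed into two scalars for brevity.
enum Signal {
    case skipEarly, completion, favorite, volumeUp, volumeDown, repeatTrack

    /// (weight, routes-to-slow-state), straight from the table.
    var effect: (alpha: Double, slow: Bool) {
        switch self {
        case .skipEarly:   return (-0.42, false)
        case .completion:  return (0.28, false)
        case .favorite:    return (0.74, true)
        case .volumeUp:    return (0.19, false)
        case .volumeDown:  return (-0.16, false)
        case .repeatTrack: return (0.61, true)
        }
    }
}

/// Fast signals shape the current session; slow signals survive the
/// overnight integration into the per-agent αᶠ weights.
func apply(_ signal: Signal, fast: inout Double, slow: inout Double) {
    let (alpha, routesSlow) = signal.effect
    if routesSlow { slow += alpha } else { fast += alpha }
}
```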
Curation now uses your preferred genre per time of day instead of a global average. Your queue is pre-curated the moment you open the app, typically in under 200 ms on device.
Reorder, merge, and duplicate playlists without round-tripping through the provider. Offline edits reconcile on the next connect.
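A minimal sketch of one way the offline path could work, assuming an append-only op log replayed on reconnect; the `PlaylistOp` type and the drop-on-reject policy are assumptions, not the documented sync protocol.

```swift
/// Hypothetical offline edit log: ops accumulate locally and are replayed
/// against the provider on the next connect.
enum PlaylistOp: Codable {
    case reorder(playlistID: String, from: Int, to: Int)
    case merge(sourceID: String, targetID: String)
    case duplicate(playlistID: String)
}

struct OfflineEditLog {
    private(set) var pending: [PlaylistOp] = []

    mutating func record(_ op: PlaylistOp) { pending.append(op) }

    /// On reconnect, replay in order, then clear. Ops the provider rejects
    /// (e.g. the playlist was deleted remotely) are simply dropped.
    mutating func reconcile(apply: (PlaylistOp) -> Bool) {
        for op in pending { _ = apply(op) }
        pending.removeAll()
    }
}
```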
Performance improvements across the curation pipeline: SVAF evaluation latency per CMB is roughly 31% lower on A17 Pro.
Available now on the App Store. Update from within MeloTune, or install fresh. The mesh activates automatically the moment a second device joins your Wi-Fi network.