Release Notes · v2.9.0 · Published 7 May 2026 · iOS 17+ · macOS 14+

Different tracks.
Same emotional space.

MeloTune 2.9.0 introduces Social Music Mesh — nearby devices couple their emotional state over a peer-to-peer mesh, so two listeners drift into the same feeling without ever hearing the same song. The on-device model now learns from how you listen, not what you tell it.

Version: 2.9.0 · 26.05.07
Runtime: On-device LNN · CoreML
Transport: Bonjour · WebSocket relay
Data that leaves device: Mood field only · E2EE
§ 01 — Headline Feature

Social Music Mesh. Your music responds to their mood.

When a friend is nearby, your MeloTune and theirs become coupled agents on the same mesh. Mood and genre preferences converge in real time — you don't play the same songs, you share the same emotional trajectory.

Two phones on the same network. Each runs its own Liquid Neural Network with bimodal time constants: fast neurons track mood, slow neurons preserve taste. When the devices meet, SVAF evaluates each incoming Cognitive Memory Block per-field and decides — autonomously, on-device — what to remix into its own state.

The result: you naturally drift into the same genre as the person beside you, without either of you choosing. Different tracks, same feeling.
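
The convergence described above can be pictured as two leaky integrators nudging each other. The sketch below is a minimal illustration, assuming simple linear dynamics; the coupling gains for the Off/Gentle/Responsive influence levels, the time constant, and the `step` function are all invented for illustration, not MeloTune's actual implementation.

```python
# Minimal sketch of mood-field coupling between two nearby agents.
# All constants and names here are illustrative assumptions.

MOOD_DIM = 7  # the 7-dimensional mood field exchanged over the mesh

# Influence control: Off / Gentle / Responsive map to coupling gains.
INFLUENCE = {"off": 0.0, "gentle": 0.05, "responsive": 0.2}

def step(own, peer, target, level="gentle", dt=0.1, tau_fast=1.0):
    """One fast-neuron step: relax toward the local listening target
    while the peer term nudges the trajectory sideways."""
    k = INFLUENCE[level]
    return [m + dt * ((t - m) / tau_fast + k * (p - m))
            for m, p, t in zip(own, peer, target)]

# Two listeners with different local targets still drift closer together.
a, b = [0.0] * MOOD_DIM, [1.0] * MOOD_DIM
for _ in range(300):
    a, b = (step(a, b, [0.0] * MOOD_DIM, "responsive"),
            step(b, a, [1.0] * MOOD_DIM, "responsive"))
gap = sum(abs(x - y) for x, y in zip(a, b)) / MOOD_DIM
```

Note that the gap shrinks but never closes: each agent keeps pulling toward its own target, so the drift is gradual and partial rather than a merge.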

  • 01 · Automatic discovery. No pairing, no accounts, no setup. Devices find each other over Bonjour the moment the app opens.
  • 02 · Convergence in real time. Mood and genre fields couple continuously — the drift is gradual, never sudden.
  • 03 · Influence control. Three states: Off, Gentle, Responsive. Choose how much of the room gets to touch your music.
  • 04 · Focus mode. Blocks all peer influence. Your agent still observes the mesh, but stops remixing inbound state.
  • 05 · End-to-end encrypted. Peer-to-peer. No server. Only the 7-dimensional mood field is exchanged — never tracks, never history.
§ 02 — Learning

Music that learns you. No questionnaires.

MeloTune observes how you actually listen and uses that to shape what comes next. Every skip is a gradient. Every completion is confirmation. The model keeps a separate preference trajectory per genre and per time-of-day — morning rock feels different from evening rock, because it is.

01 · signal

Skips & completions

Implicit preference. A skip in the first 15 seconds is weighted differently than one at 02:40 — the model treats them as different gestures, not the same thumbs-down.
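
As an illustration, a position-dependent skip penalty might look like the sketch below. The 15-second breakpoint matches the text and the −0.42 base weight comes from the § 02.1 table, but the fade rule and the `skip_weight` function itself are assumptions.

```python
# Illustrative sketch of position-dependent skip weighting: an early
# skip reads as rejection, a late skip as mere drift. The fade rule
# is an assumption, not MeloTune's tuned behaviour.

EARLY_SKIP_S = 15.0  # skips before this are treated as rejection

def skip_weight(position_s: float, duration_s: float) -> float:
    """Map a skip's position in the track to a preference delta."""
    if position_s < EARLY_SKIP_S:
        return -0.42                      # strong negative (cf. § 02.1)
    # Past the early window, the penalty fades with listened fraction.
    listened = min(position_s / duration_s, 1.0)
    return -0.42 * (1.0 - listened)
```

Under this rule a skip at 02:40 of a 200-second track costs only a fifth of an immediate skip, so the two gestures land as genuinely different signals.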

02 · signal

Volume & favorites

Turning it up is a signal. Turning it down is a signal. Favorites are the loudest signal — they bias the slow-neuron taste trajectory, not just the fast mood one.

03 · context

Per-genre, per-time-of-day

Your 7am jazz and your 11pm jazz are separate preference vectors. The same song, surfaced at the wrong hour, isn't the same song to the model.
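
A minimal way to keep such separate trajectories is to key preference vectors by (genre, time-of-day bucket). The bucket boundaries and the `update` helper below are illustrative assumptions, not MeloTune's scheme.

```python
from collections import defaultdict

# Sketch of per-genre, per-time-of-day preference trajectories:
# one 7-dim vector per (genre, bucket) key. Constants are assumed.

BUCKETS = [(5, "morning"), (12, "afternoon"), (18, "evening"), (23, "night")]

def bucket(hour: int) -> str:
    """Map an hour of day to a coarse time-of-day bucket."""
    for start, name in reversed(BUCKETS):
        if hour >= start:
            return name
    return "night"  # 00:00-04:59 folds into the night bucket

prefs = defaultdict(lambda: [0.0] * 7)  # one 7-dim vector per key

def update(genre: str, hour: int, delta):
    """Apply a preference delta to the trajectory for this context."""
    key = (genre, bucket(hour))
    prefs[key] = [v + d for v, d in zip(prefs[key], delta)]
    return key
```

With this keying, a 7am jazz session and an 11pm jazz session touch different vectors, which is exactly the separation the text describes.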

04 · loop

Gets smarter every session

No manual tuning. No rating prompts. The Liquid Neural Network integrates every listening session into the per-agent αᶠ weights overnight.

§ 02.1 — Signal weighting
Signal      | Field           | Weight (αᶠ) | Decay | Notes
Skip < 15 s | mood, genre     | −0.42       | fast  | Strong negative on current trajectory; minimal taste impact.
Completion  | mood, arousal   | +0.28       | fast  | Confirms fit. Compounds if repeated.
Favorite    | taste, genre    | +0.74       | slow  | Biases long-horizon preference vector.
Volume ↑    | arousal, intent | +0.19       | fast  | Proxy for immersion. Gated by context.
Volume ↓    | arousal, focus  | −0.16       | fast  | Often signals focus-seeking, not dislike.
Repeat      | taste           | +0.61       | slow  | Session-scoped. Higher weight when manually invoked.
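
One plausible reading of the table above: each listening event contributes its αᶠ to the matching trajectory, and fast entries decay with a much shorter half-life than slow ones. The half-life values and the `preference` function in this sketch are assumptions, not shipped constants.

```python
# Sketch of applying the signal-weighting table: decayed sums over
# events, split by fast/slow decay class. Half-lives are assumed.

ALPHA = {                      # signal -> (alpha, decay class)
    "skip_early":  (-0.42, "fast"),
    "completion":  (+0.28, "fast"),
    "favorite":    (+0.74, "slow"),
    "volume_up":   (+0.19, "fast"),
    "volume_down": (-0.16, "fast"),
    "repeat":      (+0.61, "slow"),
}
HALF_LIFE_S = {"fast": 600.0, "slow": 7 * 24 * 3600.0}  # assumed values

def preference(events, now_s):
    """Sum decayed alphas; events are (signal, timestamp_s) pairs."""
    total = {"fast": 0.0, "slow": 0.0}
    for signal, t in events:
        alpha, decay = ALPHA[signal]
        age = now_s - t
        total[decay] += alpha * 0.5 ** (age / HALF_LIFE_S[decay])
    return total
```

Under these assumed half-lives, an hour-old skip has almost vanished while an hour-old favorite is still nearly at full strength, matching the fast-mood / slow-taste split in the table.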
§ 03 — Principle
Your mood is the only field that crosses all boundaries. Even when a peer's Cognitive Memory Block is rejected, the mood channel is always delivered.
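
A sketch of this delivery rule, assuming a Cognitive Memory Block is a flat field map and an `accept` predicate stands in for the per-field SVAF verdict (both are assumptions for illustration):

```python
# Sketch of the § 03 principle: even when SVAF rejects every field of
# a peer's Cognitive Memory Block, the mood channel still lands.

def remix(own_state: dict, peer_cmb: dict, accept) -> dict:
    """Merge accepted fields from a peer CMB; mood always passes."""
    merged = dict(own_state)
    for field, value in peer_cmb.items():
        if field == "mood" or accept(field, value):
            merged[field] = value
    return merged
```

The gate sits per field, so a reject-everything verdict still leaves the mood channel coupled while genre, taste, and the rest stay local.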
§ 04 — Under the Hood

Quieter changes, measurable impact.

04.01 · curation

Smarter "Ready For You"

Now uses your preferred genre per time-of-day instead of a global average. Pre-curated the moment you open the app — typically sub-200ms on device.

04.02 · library

Improved playlist management

Reorder, merge, and duplicate without round-tripping through the provider. Offline edits reconcile on next connect.

04.03 · pipeline

Curation pipeline, faster

Performance improvements across the curation pipeline. Per-CMB SVAF evaluation latency is ~31% lower on A17 Pro.

§ 05 — Install

Get 2.9.0.

Available now on the App Store. Update from within MeloTune, or install fresh. The mesh activates automatically the moment a second device joins your Wi-Fi network.