Inspiration
Music creation is often split between technical tools and creative intuition.
We wanted to close that gap by building a platform where anyone can describe a vibe in natural language and instantly experiment with AI-generated music, while still giving developers transparency, control, and verifiability.
The project was inspired by the idea that AI, data, and blockchain can work together to make creative tools more accessible, auditable, and fun to experiment with.
What it does
Vibe Composer is a web-based creative playground that lets users:
Describe a musical idea or vibe in plain English
Generate structured music parameters and MIDI compositions using AI
Chat with Gradient-hosted LLaMA models for creative exploration
Interact with local LLMs for private, offline experimentation
Optionally anchor creative events on Solana for transparency and provenance
Store and analyze generation data using modern data infrastructure
The platform combines AI-driven creativity, developer tooling, and data observability in a single, easy-to-use interface.
Tech Stack Diagram

How we built it
We built Vibe Composer using a modular, full-stack architecture:
Frontend: Next.js (App Router) with React and Tailwind CSS for a fast, responsive UI
AI & Backend: Python FastAPI services that connect to DigitalOcean Gradient AI and local Docker-based LLM runners
DigitalOcean Gradient: LLaMA models for prompt understanding, chat, and music-parameter generation
Music Engine: Python-based MIDI generation pipeline driven by AI-produced parameters
MongoDB: Primary database for chat history, prompts, and generated metadata
Snowflake API: SQL-based analytics and querying of application events
Solana Devnet: Lightweight on-chain records for verifiable creative events and proofs of activity
This approach allowed us to rapidly prototype, iterate, and demonstrate how AI, data platforms, and blockchain can come together in a practical creative application.
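To give a feel for the parameter-driven MIDI pipeline, here is a minimal sketch (illustrative only, not our exact engine code) of how an AI-produced key and scale can be mapped to MIDI note numbers; the note tables are standard music theory.

```python
# Map an AI-chosen key/scale to one octave of MIDI note numbers.
# Interval tables are standard major/natural-minor scale steps.
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]
MINOR_STEPS = [0, 2, 3, 5, 7, 8, 10]
# MIDI note numbers for each root around middle C (C4 = 60).
KEY_ROOTS = {"C": 60, "D": 62, "E": 64, "F": 65, "G": 67, "A": 69, "B": 71}

def scale_notes(key: str, scale: str) -> list:
    """Return the MIDI note numbers for one octave of the requested scale."""
    root = KEY_ROOTS[key]
    steps = MINOR_STEPS if scale == "minor" else MAJOR_STEPS
    return [root + step for step in steps]
```

The real pipeline feeds notes like these into a MIDI writer along with tempo and instrument choices from the AI parameters.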
MLH - Use of Solana
What we used Solana for
We used Solana Devnet to power a lightweight reward/payment flow tied to user-created content. When a user saves a generated song’s metadata, the app can trigger a 0.01 SOL reward transaction to the user’s connected wallet address.
How it works (end-to-end)
The user connects a Solana wallet (e.g., Phantom) in our Next.js frontend.
When the user clicks “Reward 0.01 SOL”, the frontend calls our backend endpoint:
POST /reward/send
The backend submits a Solana transaction that sends SOL from a server-funded Devnet wallet to the user’s wallet address.
The backend returns the transaction signature, which the frontend displays with a link to Solana Explorer.
We enforce idempotency using an idempotency_key so the same song/save action can’t be rewarded multiple times accidentally.
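The idempotency guard can be sketched like this (class and field names are illustrative, not our exact backend code): before broadcasting, the server checks whether the key was already rewarded and, if so, returns the stored signature instead of sending SOL again.

```python
# Minimal sketch of the reward idempotency guard. In production the
# key -> signature map lives in a database, not in memory.
class RewardLedger:
    def __init__(self):
        self._sent = {}  # idempotency_key -> transaction signature

    def send_reward(self, idempotency_key: str, wallet: str, broadcast) -> dict:
        """Send 0.01 SOL once per idempotency_key; repeat calls are no-ops."""
        if idempotency_key in self._sent:
            return {"status": "already_rewarded",
                    "signature": self._sent[idempotency_key]}
        # broadcast() submits the transfer via Solana RPC and returns the signature
        signature = broadcast(wallet, 0.01)
        self._sent[idempotency_key] = signature
        return {"status": "rewarded", "signature": signature}
```

The frontend maps the two statuses to the "Reward sent ✅" and "Already rewarded ✅" messages described below.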
Why we designed it this way
Some networks (like school Wi-Fi) can block direct Solana RPC calls from the browser. To make rewards reliable, we moved the chain interaction to a server-side relayer (hosted on Render). The frontend only talks to our API over normal HTTPS, and the backend handles broadcasting and confirmation via Solana RPC.
What users see
A Reward button in the app
A confirmation message: “Reward sent ✅” or “Already rewarded ✅”
Anyone can verify the transaction on Solscan using the transaction signature or the sender (giver) wallet address.
MLH - Use of Digital Ocean
DigitalOcean Gradient AI powers the core generative intelligence of the platform. We use Gradient-hosted LLaMA models to:
Convert natural-language prompts into structured parameters
Generate AI chat responses for music composition and experimentation
Provide fast, scalable inference through a managed GPU-backed service
This allows users to interact with powerful LLMs without managing infrastructure.
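The prompt-to-parameters step asks the model to emit JSON, which the backend then validates. A sketch of the parsing side (field names and defaults are illustrative, not our exact schema):

```python
# Turn a Gradient-hosted model's JSON reply into validated music parameters,
# falling back to sensible defaults for any field the model omits.
import json

def parse_music_params(model_reply: str) -> dict:
    """Extract the JSON object the LLM was asked to emit, with defaults."""
    params = json.loads(model_reply)
    return {
        "tempo_bpm": int(params.get("tempo_bpm", 120)),
        "key": params.get("key", "C"),
        "scale": params.get("scale", "major"),
        "instruments": params.get("instruments", ["piano"]),
    }
```

Keeping validation on our side means a slightly malformed model reply degrades to defaults instead of crashing the MIDI pipeline.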
MLH - Use of Snowflake API
Snowflake is used as an analytics and event-query layer.
The Snowflake API enables:
Querying structured event data generated by the application
Running SQL-based analysis on user interactions and generation metadata
Demonstrating how AI applications can integrate with modern data warehouses for insights, reporting, and observability
This highlights Snowflake’s role in AI + data-driven applications.
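As a hedged example of the kind of event query we run (table and column names are illustrative, not our exact warehouse schema), the helper below builds the SQL; executing it requires the snowflake-connector-python package and account credentials.

```python
# Build an analytics query over application generation events.
def top_styles_query(limit: int = 10) -> str:
    """SQL for the most frequently generated music styles."""
    return (
        "SELECT style, COUNT(*) AS generations "
        "FROM generation_events "
        "GROUP BY style "
        f"ORDER BY generations DESC LIMIT {limit}"
    )

# Executing against Snowflake would look roughly like:
# with snowflake.connector.connect(**credentials) as conn:
#     rows = conn.cursor().execute(top_styles_query()).fetchall()
```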
MLH - Use of MongoDB
MongoDB is used as the primary application database.

It stores:
Chat histories and AI-generated outputs
User-generated prompts and composition metadata
Flexible JSON-based records that evolve as the app grows
MongoDB’s schema flexibility makes it ideal for rapidly iterating on AI-driven products where data structures change frequently.
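An illustrative shape for a composition record (field names are examples, not our exact schema) shows why the flexibility helps: new fields can be added as the app evolves without migrations.

```python
# Build a flexible JSON-style document for a generated composition.
from datetime import datetime, timezone

def make_composition_doc(prompt: str, params: dict) -> dict:
    """Record linking a user prompt to its AI-generated parameters."""
    return {
        "prompt": prompt,
        "params": params,  # nested AI output, free to grow new fields
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

# Inserting with pymongo would look like:
# db.compositions.insert_one(make_composition_doc(prompt, params))
```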
MLH - Use of ARM
🧠 Local LLM on ARM (Docker Model Runner)
Arm Learning Path: https://learn.arm.com/learning-paths/laptops-and-desktops/docker-models/models/
The Apple Silicon M2 is an aarch64 (64-bit Arm) chip, so we followed the learning path above to run ai/smollm2 locally.
Our application runs its core AI logic locally on Arm hardware using Docker Model Runner. We deploy a lightweight Large Language Model (SmolLM2) that performs on-device inference through an API, which is consumed directly by our Next.js application.
The system runs on Arm64 (aarch64) architecture, verified at runtime, and leverages Docker Model Runner’s integration with llama.cpp, which is optimized for efficient CPU-based inference on Arm processors. This allows us to run AI workloads without GPUs or cloud-hosted inference, reducing latency, cost, and external dependencies.
By executing the LLM entirely on Arm hardware, Vibe Composer demonstrates a practical edge-AI workflow that aligns with Arm’s vision for energy-efficient, privacy-preserving, and portable AI systems. This approach enables the application to function offline, scale across Arm-based laptops, servers, and edge devices, and remain lightweight enough for real-time interaction.
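A sketch of how a backend could talk to the local model: Docker Model Runner exposes an OpenAI-compatible HTTP API, and the port/path below follow Docker's documented host-side TCP endpoint, but treat them as assumptions and verify against your own setup.

```python
# Call a Docker Model Runner-hosted model over its OpenAI-compatible API.
import json
import platform
import urllib.request

# Assumed default host-side endpoint; confirm with your Docker Model Runner config.
MODEL_RUNNER_URL = "http://localhost:12434/engines/v1/chat/completions"

def running_on_arm() -> bool:
    # "arm64" on Apple Silicon macOS, "aarch64" on Arm Linux
    return platform.machine().lower() in ("arm64", "aarch64")

def chat_payload(user_message: str) -> dict:
    """OpenAI-style chat-completion request body for the local SmolLM2 model."""
    return {
        "model": "ai/smollm2",
        "messages": [{"role": "user", "content": user_message}],
    }

def ask_local_llm(message: str) -> str:
    """POST a chat request to the local runner and return the reply text."""
    req = urllib.request.Request(
        MODEL_RUNNER_URL,
        data=json.dumps(chat_payload(message)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the API shape matches OpenAI's, the same client code can be pointed at either the local runner or a cloud endpoint.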
Challenges we ran into
One of the biggest challenges was integrating Solana Devnet from a restricted network environment. Our campus network (EDUROAM/WPI) blocks Solana RPC traffic, which forced us to rethink our architecture and deploy a dedicated Solana backend server separately. While unexpected, this ultimately improved our system design by cleanly isolating blockchain responsibilities.
Another major challenge was working across rapidly evolving Solana Python SDKs (solana-py + solders). Many APIs are strongly typed and differ across versions, leading to subtle runtime errors (e.g., blockhash and signature type mismatches). Debugging these issues required deep inspection of RPC responses and adapting our code to be version-robust.
We also faced challenges extracting on-chain memo data reliably. The transaction response object structure varies depending on encoding and SDK versions, so we had to implement a resilient, log-based extraction method to ensure metadata could always be retrieved and displayed in the web app.
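The log-based fallback can be sketched as follows: the SPL Memo program writes the memo text into the transaction's log messages, so we match that line pattern instead of depending on any particular response object shape (the regex is loose on purpose, since exact wording can vary).

```python
# Recover memo text from a transaction's log messages, independent of
# SDK version or response encoding.
import re
from typing import Optional

def extract_memo(log_messages: list) -> Optional[str]:
    """Return the first memo string found in the logs, or None."""
    for line in log_messages:
        # SPL Memo logs lines like: Program log: Memo (len 5): "hello"
        match = re.search(r'Memo \(len \d+\): "(.*)"', line)
        if match:
            return match.group(1)
    return None
```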
Finally, orchestrating multiple AI systems (local SmolLM2, Gradient-hosted Llama models, MongoDB context storage, and Snowflake metadata storage) required careful separation of concerns to avoid latency, cost overruns, and prompt inconsistencies.
Accomplishments that we're proud of
Successfully built a full end-to-end pipeline:
Compose music in the browser
Convert prompts to structured parameters with AI
Generate music
Publish verifiable metadata on Solana Devnet
Retrieve and display on-chain data back in the web app
Implemented on-chain proof of authorship using the Solana Memo Program, allowing anyone to verify song metadata using a transaction signature.
Designed a modular, multi-backend architecture with FastAPI services specialized for:
Music generation
AI orchestration and data storage
Blockchain publishing and rewards
Integrated Phantom Wallet for Solana Devnet identity while keeping private keys securely off the frontend.
Built a cost-aware AI system that combines:
Local SmolLM2 for fast, contextual chat
Gradient-hosted Llama models for heavier generation tasks
What we learned
Blockchain SDKs are not static: real-world Solana development requires handling strict typing, version mismatches, and evolving APIs—especially in Python.
On-chain data should be minimal: storing hashes and metadata references (rather than full content) is both scalable and aligned with best practices.
Separation of concerns pays off: splitting AI, music generation, storage, and blockchain logic into distinct services made the system easier to debug, deploy, and extend.
Hybrid AI architectures work: combining local models with cloud-based LLMs gives better performance and cost control than relying on a single model.
Network constraints matter: building production-like systems means accounting for real deployment limitations, not just ideal local setups.
What's next for vibe-composer
🎵 Mint Song NFTs on Solana Devnet (and eventually mainnet) so songs appear directly in users’ wallets.
🔐 Move to client-side Phantom signing, eliminating backend private keys entirely.
🌐 Build a public Song Explorer that lets anyone browse, play, and verify published compositions.
🎛️ Expand the composer with more controls (instruments, chord progressions, MIDI export).
📊 Use Snowflake analytics to surface trends like most-used styles, tempos, and keys.
🔗 Add on-chain hash verification to prove audio file integrity over time.
Built With
- arm
- digitalocean
- fastapi
- llama
- mongodb
- nextjs
- python
- smollm2
- snowflake
- solana
