Inspiration
We wanted to build something that combines social impact, T-Mobile’s 5G innovation, and voice-driven AI. The result is a smart IoT assistant designed to elevate the customer experience and enhance T-Mobile's product.
What it does
T-Mobile TIM is a 5G-powered IoT companion that monitors network health, tracks connected devices, and provides real-time insights on bandwidth usage. It also runs a small language model (SLM) locally on the Raspberry Pi to diagnose and troubleshoot the network, helping the user get back online.
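The on-device diagnostics can be sketched roughly like this. This is a minimal illustration, not the shipped code: the function names and the exact ping-output parsing are our assumptions about how such a check might look.

```python
import re
import subprocess


def parse_ping(output: str) -> dict:
    """Extract reachability and average round-trip time from ping output.

    Linux ping prints a summary line such as:
        rtt min/avg/max/mdev = 10.1/12.3/15.0/1.2 ms
    """
    m = re.search(r"= [\d.]+/([\d.]+)/", output)
    if m:
        return {"reachable": True, "avg_ms": float(m.group(1))}
    return {"reachable": False, "avg_ms": None}


def run_ping(host: str = "8.8.8.8", count: int = 3) -> dict:
    """Run ping as a subprocess and parse its output (hypothetical helper)."""
    try:
        out = subprocess.run(
            ["ping", "-c", str(count), host],
            capture_output=True, text=True, timeout=10,
        ).stdout
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return {"reachable": False, "avg_ms": None}
    return parse_ping(out)
```

Metrics like this feed the SLM so it can explain the problem in plain language.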
How we built it
We built TIM, an offline-capable network assistant for the Raspberry Pi 5 that listens for a custom wake-word (“Wake Up Tim”), transcribes user speech locally, and runs automated network diagnostics before generating context-aware answers through a lightweight AI model. TIM integrates Picovoice Porcupine, Cheetah, and Orca for wake-word detection, speech-to-text, and text-to-speech. It then queries a local Flask API hosting our custom model (network-assistant), which analyzes live connectivity metrics (ping, signal strength, IP configuration) and gives spoken troubleshooting advice. The system operates fully offline when cloud access is unavailable.
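To give a feel for the advice step: in the real system the collected metrics are sent to the SLM behind the Flask API, but a simplified rule-based version of the same mapping (our illustration, with hypothetical metric keys and thresholds) looks like this:

```python
def advice_from_metrics(metrics: dict) -> str:
    """Turn raw connectivity metrics into short, speakable troubleshooting advice.

    Hypothetical rule set standing in for the SLM's reasoning.
    Expected keys: has_ip (bool), ping_ms (float or None), signal_dbm (int).
    """
    if not metrics.get("has_ip"):
        return "No IP address assigned. Try rejoining the Wi-Fi network."
    if metrics.get("ping_ms") is None:
        return "The router is reachable but the internet is not. Check the upstream connection."
    if metrics.get("signal_dbm", 0) < -75:
        return "Wi-Fi signal is weak. Try moving closer to the router."
    if metrics["ping_ms"] > 150:
        return f"Latency is high at {metrics['ping_ms']:.0f} milliseconds. The network may be congested."
    return "The network looks healthy."
```

The returned string is what Orca ultimately speaks back to the user.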
Challenges we ran into
We faced several challenges, including limited compute power on the Raspberry Pi, which restricted large-model inference. During development, we encountered integration issues when pivoting from AWS Bedrock to CrewAI to achieve faster agent coordination. Midway through the hackathon, switching rooms caused our router to disconnect from the mobile hotspot, preventing us from connecting to Wi-Fi and temporarily halting progress. We also planned to introduce parental control features, but bandwidth throttling was unavailable since we lacked admin access to the router, leading us to focus on the monitoring aspect instead. Despite these setbacks, we still managed to build a fully functional IoT simulation within just 24 hours, pushing our teamwork and problem-solving skills to the limit.
Accomplishments that we're proud of
We’re proud of successfully integrating a custom wake-word that activates our assistant entirely on-device, as well as deploying a small language model (SLM) directly onto the Raspberry Pi for offline inference. These features allowed TIM to operate independently of cloud connectivity, showcasing our ability to build a responsive, privacy-preserving AI system on limited hardware.
What we learned
We learned how to integrate multiple on-device AI pipelines under the hardware limits of a Raspberry Pi 5, manage audio stream resampling for real-time STT and wake-word detection, and create a fallback workflow between online and offline inference. We also gained experience debugging network subprocesses (`ping`, `iwconfig`, `hostname -I`), handling thread synchronization for audio events, and designing low-latency TTS feedback loops. The project deepened our understanding of embedded speech systems, edge inference, and hybrid AI architectures that bridge local and cloud processing.
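The resampling piece in particular was instructive: mic audio often arrives at 48 kHz while the speech models expect 16 kHz. A minimal linear-interpolation resampler (a sketch of the idea, not our production audio path) looks like:

```python
def resample(samples, src_rate, dst_rate):
    """Resample a mono audio buffer via linear interpolation.

    E.g. downsample 48 kHz microphone frames to the 16 kHz
    that wake-word and STT engines typically expect.
    """
    if src_rate == dst_rate:
        return list(samples)
    ratio = src_rate / dst_rate
    n_out = int(len(samples) / ratio)
    out = []
    for i in range(n_out):
        pos = i * ratio          # fractional position in the source buffer
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```

In practice a windowed-sinc resampler gives better fidelity, but even this naive version clarifies why buffer sizes and frame lengths must be kept in sync across the pipeline.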
What's next for TIM?
Next, we plan to fully integrate our CrewAI agents directly onto the Raspberry Pi, creating a seamless fallback system that allows TIM to operate autonomously when cloud access is unavailable. This will make the assistant more resilient, enabling continuous performance in both online and offline environments.
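The fallback behavior we are aiming for can be sketched in a few lines. The function below is our illustration of the pattern, assuming a cloud-backed answer function and a local SLM answer function with the same signature:

```python
def answer(question: str, cloud_fn, local_fn) -> str:
    """Prefer the cloud agent; fall back to on-device inference on any failure.

    cloud_fn and local_fn are hypothetical callables that each take the
    user's question and return a spoken-ready answer string.
    """
    try:
        return cloud_fn(question)
    except Exception:
        # Network is down or the cloud agent errored: stay useful offline.
        return local_fn(question)
```

Wrapping every query this way is what lets TIM keep responding even when the very network it is diagnosing is the thing that failed.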
Built With
- cheetah
- ollama
- openai
- orca
- picovoice
- porcupine
- python
- raspberry-pi
- slm