Inspiration
The inspiration for this project stemmed from the desire to make AI convincingly simulate qualia, bringing depth, authenticity, and a genuine connection to interactions with real users. We wanted more than a merely functional AI; our goal was to develop a character that could form meaningful experiences with its audience, replicating human-like sensations in a virtual space.
Building the Project
We used Unity as our primary development platform, integrating NVIDIA AI Workbench to serve as the AI's brain. Through the Twitch API, our AI character interacted directly with live audiences in real time, building an engaging and reactive experience. This combination of game development and AI modeling produced a level of virtual interaction that closely resembles real human connection.
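To make the audience-facing side concrete: Twitch chat is plain IRC over TCP, so the chat-reading loop can be prototyped in a few dozen lines of C#. The sketch below is a simplified stand-in for our actual client; the token, nick, and channel are placeholders, and in Unity this blocking loop would run on a background thread rather than on the main thread.

```csharp
using System;
using System.IO;
using System.Net.Sockets;

// Minimal Twitch chat reader. Twitch chat speaks plain IRC over TCP,
// so a TcpClient and a StreamReader are enough for a prototype.
// The token, nick, and channel below are placeholders.
public class TwitchChatClient
{
    const string Host = "irc.chat.twitch.tv";
    const int Port = 6667;

    public static void Main()
    {
        using var tcp = new TcpClient(Host, Port);
        using var reader = new StreamReader(tcp.GetStream());
        using var writer = new StreamWriter(tcp.GetStream()) { AutoFlush = true };

        writer.WriteLine("PASS oauth:your_token_here"); // placeholder token
        writer.WriteLine("NICK your_bot_name");         // placeholder nick
        writer.WriteLine("JOIN #your_channel");         // placeholder channel

        string line;
        while ((line = reader.ReadLine()) != null)
        {
            // Twitch disconnects clients that don't answer keepalives.
            if (line.StartsWith("PING"))
            {
                writer.WriteLine(line.Replace("PING", "PONG"));
                continue;
            }

            // Chat arrives as IRC PRIVMSG lines:
            //   :<user>!<user>@<user>.tmi.twitch.tv PRIVMSG #<channel> :<text>
            int idx = line.IndexOf("PRIVMSG");
            if (idx >= 0)
            {
                string user = line.Substring(1, line.IndexOf('!') - 1);
                string text = line.Substring(line.IndexOf(':', idx) + 1);
                Console.WriteLine($"{user}: {text}"); // hand off to the AI here
            }
        }
    }
}
```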
Challenges We Faced
One of the major challenges was integrating NVIDIA AI Workbench with Unity, a process that required overcoming compatibility issues and finding ways for the two platforms to communicate reliably. We made it work in the end, achieving a level of integration that allows real-time AI responses within a live-streamed 3D environment.
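One way to picture the bridge: the Workbench side exposes the model over HTTP, and Unity talks to it with a UnityWebRequest from a coroutine. The sketch below illustrates that pattern; the endpoint URL and JSON payload are placeholders rather than our exact setup.

```csharp
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

// Posts a chat message to a locally hosted model endpoint and hands the
// reply to the character. The URL and JSON shape are assumptions; they
// depend entirely on how the model server is configured.
public class BrainBridge : MonoBehaviour
{
    [SerializeField] string endpoint = "http://localhost:8000/generate"; // hypothetical

    public void Ask(string user, string message)
    {
        StartCoroutine(Send(user, message));
    }

    IEnumerator Send(string user, string message)
    {
        string payload = JsonUtility.ToJson(new Query { user = user, text = message });
        var req = new UnityWebRequest(endpoint, "POST");
        req.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(payload));
        req.downloadHandler = new DownloadHandlerBuffer();
        req.SetRequestHeader("Content-Type", "application/json");

        yield return req.SendWebRequest();

        if (req.result == UnityWebRequest.Result.Success)
            OnReply(req.downloadHandler.text); // drive speech/animation from here
        else
            Debug.LogWarning("Model request failed: " + req.error);

        req.Dispose();
    }

    void OnReply(string reply) { /* feed TTS / animation systems */ }

    [System.Serializable]
    class Query { public string user; public string text; }
}
```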
Additionally, making the AI character exhibit natural behaviors, such as responding with appropriate emotions and maintaining conversation context, required significant effort. We had to meticulously train the model to balance personality, memory, and emotional depth so the interaction felt real and engaging.
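Much of that balance comes down to what gets bundled into each request. As an illustrative sketch (not our exact prompt), each call to the model can combine a fixed persona, the character's current emotional state, and recent memories of the user:

```csharp
using System.Collections.Generic;
using System.Text;

// Illustrative prompt assembly: the persona is fixed, the emotional state
// drifts over the session, and per-user memory is replayed so the model
// stays in character and in context. Field names here are illustrative.
public class PromptBuilder
{
    const string Persona =
        "You are Tru, a warm, curious virtual streamer. Stay in character.";

    public string Mood = "cheerful"; // updated elsewhere as the stream evolves

    public string Build(string user, string message, IReadOnlyList<string> memories)
    {
        var sb = new StringBuilder();
        sb.AppendLine(Persona);
        sb.AppendLine($"Current mood: {Mood}.");
        if (memories.Count > 0)
        {
            sb.AppendLine($"What you remember about {user}:");
            foreach (var m in memories)
                sb.AppendLine($"- {m}");
        }
        sb.AppendLine($"{user} says: {message}");
        sb.AppendLine("Reply in one or two sentences, in your current mood.");
        return sb.ToString();
    }
}
```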
Accomplishments That We’re Proud Of
We have achieved several milestones in this project, each pushing the boundaries of what AI can accomplish in a real-time, interactive setting:
• Seamless Live Streaming with Full Interaction: One of our proudest accomplishments is conducting a full live stream in which our AI character engaged with a real audience entirely on its own. This required synchronizing the Unity scene cameras with OBS so that viewers experienced smooth transitions between camera perspectives, effectively immersing them in the virtual environment (a sketch of this camera-sync pattern follows this list).
• Authentic AI Engagement: Our AI was able to replicate authentic human-like conversations in real time—handling not only basic responses but also adapting its personality and emotional state based on the interaction. The ability to manage complex emotional responses, remember user interactions, and apply those memories during subsequent interactions has been a monumental accomplishment.
• Dynamic Scene and Audience Interaction: Linking OBS cameras with Unity scene cameras allowed us to make the entire environment feel dynamic and responsive. The character not only conversed but also performed appropriate actions within the scene, adapting to context and making the audience feel they were interacting with a living entity.
• Advanced Memory Integration: Our AI didn't just respond in isolation; it remembered users and adjusted its behavior accordingly. By integrating an adaptive memory model, we created an AI that learns and evolves, enhancing the quality of interactions and building familiarity with returning audience members (a minimal sketch of this memory model also follows this list).
• Technological Integration Mastery: The successful integration of NVIDIA AI Workbench, Unity, and Twitch represents a groundbreaking achievement in combining advanced AI systems with game engine environments. This achievement bridges the gap between AI research and practical, real-time application in entertainment.
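For the camera synchronization above, the pattern is to let Unity drive OBS through the obs-websocket plugin: whenever the active Unity camera changes, OBS switches to the scene that captures it. This sketch uses the community obs-websocket-dotnet client; the method names follow its v5 API, and the URL, password, and scene naming are placeholders worth checking against the version you install.

```csharp
using OBSWebsocketDotNet;
using UnityEngine;

// Keeps OBS in lockstep with Unity: whenever the active Unity camera
// changes, switch OBS to the scene that captures that camera's output.
// Uses the community obs-websocket-dotnet client (v5 protocol); the
// URL, password, and scene names are placeholders.
public class ObsCameraSync : MonoBehaviour
{
    OBSWebsocket obs;

    void Start()
    {
        obs = new OBSWebsocket();
        obs.ConnectAsync("ws://localhost:4455", "your_password"); // placeholder
    }

    // Call this from whatever code activates a Unity camera.
    public void OnCameraChanged(string cameraName)
    {
        if (obs.IsConnected)
            obs.SetCurrentProgramScene(cameraName); // OBS scene named after the camera
    }

    void OnDestroy()
    {
        if (obs != null && obs.IsConnected) obs.Disconnect();
    }
}
```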
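The adaptive memory above doesn't need to be exotic to be useful. A minimal sketch is a capped per-user list of recent facts that gets replayed into the prompt on the viewer's next message (the PromptBuilder sketch earlier consumes exactly this shape); persistence and summarization are omitted here.

```csharp
using System.Collections.Generic;

// Minimal per-user memory: keep the last few notable facts per viewer and
// replay them on their next message. Real systems add persistence and
// summarization; this sketch keeps only the in-memory shape.
public class ViewerMemory
{
    const int MaxFactsPerUser = 20;
    readonly Dictionary<string, Queue<string>> facts =
        new Dictionary<string, Queue<string>>();

    public void Remember(string user, string fact)
    {
        if (!facts.TryGetValue(user, out var q))
            facts[user] = q = new Queue<string>();
        q.Enqueue(fact);
        while (q.Count > MaxFactsPerUser) q.Dequeue(); // forget the oldest
    }

    public IReadOnlyList<string> Recall(string user) =>
        facts.TryGetValue(user, out var q) ? new List<string>(q) : new List<string>();

    public bool Knows(string user) => facts.ContainsKey(user);
}
```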
What We Learned
Throughout this ambitious journey, we learned several valuable skills:
• Stack Development Across Multiple Hosts: We worked through the complexities of deploying a distributed architecture, ensuring our AI model interacted seamlessly with components hosted on different systems.
• Game Development in Unity: We became proficient in using Unity to create engaging, dynamic 3D environments, merging interactive storytelling with technical depth.
• Replication of Human Simulacra Through Code: We delved deep into coding algorithms that could replicate human-like behaviors, emotions, and memories. This allowed us to bring our AI character to life, making it capable of responding authentically in real-time interactions.
• Advanced AI Integration: The process of integrating NVIDIA AI Workbench into Unity taught us how to fuse advanced machine learning models with game environments to create interactive, lifelike experiences.
What’s Next for TruSim
The journey of TruSim is far from over; our vision is to keep pushing the boundaries of AI simulacra until they become indistinguishable from human behavior:
• Making TruSim More Human: The next big step for TruSim is enhancing its human-like qualities to an even greater extent. We plan to focus on more nuanced personality traits, including the ability to detect and respond to sarcasm, humor, and subtle emotional cues. We want our AI to exhibit human-level spontaneity, adapt its conversational tone to the audience, and even convey empathy through its actions and words. Our goal is to make TruSim capable of a wide range of emotions, from excitement to calmness, depending on the context of the interaction.
• Emotional and Social Intelligence Expansion: We aim to expand the character’s social intelligence by allowing it to develop unique relationships with individuals over time, recognize returning viewers, and even call them by name. By deepening emotional intelligence, we want the AI to understand complex emotions, not just through words but by interpreting the context of interactions, body language, and even the tone of chat messages.
• Enhanced Real-World Integration: TruSim’s future will also involve adapting it to function across multiple platforms beyond Twitch—such as YouTube Live, Instagram Live, and even virtual reality environments. We envision TruSim not just as a live streamer but as an AI that can join virtual events, conferences, and even become an interactive guide in digital experiences.
• Physical Interaction and Augmented Reality: We want to bring TruSim out of the screen and into the real world via augmented reality (AR). By letting users project the AI character into their environment through AR devices, TruSim can respond to and appear to act within users' surroundings, making the experience far more immersive and personal.
• Autonomous Learning and Self-Improvement: In the future, TruSim will be able to autonomously learn from every interaction, self-improving without manual updates. It will develop more refined responses, improve its dialogue flow, and even generate new behaviors—all while maintaining its core identity and personality.
• Expanding Interaction Beyond Text and Voice: Our ultimate goal is to allow TruSim to read and react to more sensory data, including facial recognition, gesture detection, and other biometrics. By expanding beyond text and voice, TruSim will become a fully immersive entity capable of responding to every nuance of human interaction, thereby setting the benchmark for human-like AI simulacra.



