Inspiration + what it does

This project is inspired by Neuro-sama and Jarvis from Iron Man. The issue I noticed is students' tendency to over-rely on artificial intelligence. I think the psychological reason is that once the AI generates all the work for you, there is little incentive to learn from it or rewrite the code yourself. I have experienced this myself too. Hence I developed the idea for Synz, which has two primary parts. The first is the LLM that controls the code logic, its ability to speak, and its ability to read the code you write every time you save the file and give advice. The second is a VTuber model, which I added because I felt that VTubers, or 3D models in general, create a more humane side, like a proper assistant rather than just an AI voice. Currently the code monitor is working, you can have conversations with it, the model works, and you can additionally ask it about code and it will talk back.

How we built it

I built it with the help of Antigravity. I planned and wrote pseudocode on Google Docs and on paper, then fed it into Antigravity. Instead of over-relying on Antigravity to program everything, I had it give me links and guides so that I, or some of my less technical friends, could help program when I was busy with exams or schoolwork. I planned it to have three separate functions: the brain, which controls all the logic; the ears, which listen via speech-to-text; and the face, which displays the replies and sends the audio file to Unity to play. There is also a fourth function, which I wrote myself in C#: the Unity-side logic that receives the information and lets the brain control the 3D model. Additionally, we used CMake and Llama 3.1 8B for the logic, and we trained a small transformer to attach to Llama 3.1 so the model talks a certain way and has a certain personality.
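On the personality side, a rough illustration of how a persona can also be injected at the prompt level for Llama 3.1 (the project attaches a small trained transformer instead; this helper and its persona text are illustrative assumptions, though the special tokens are the real Llama 3 chat-template ones):

```python
def build_llama3_prompt(persona, user_msg):
    """Format one chat turn with the Llama 3 chat template so the model
    answers in character. The persona string is a hypothetical example."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{persona}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_msg}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Example in the spirit of Synz's sarcastic tone:
prompt = build_llama3_prompt(
    "You are Synz, a sarcastic but helpful coding assistant.",
    "Why does my loop never end?",
)
```

A prompt-level persona is cheaper than training an adapter, but an adapter keeps the style consistent without spending context tokens on it.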

Challenges we ran into

We ran into a multitude of challenges. The biggest throughout the process was its ability to speak: Unity and the Python script wouldn't communicate, and often there were echoes where the model only replied with what I had just said. After changing and fixing the logic, the brain now prints the reply to the face, the face converts the most recent reply into an MP3 that gets sent to Unity to play, and that loop repeats with every new reply. It's slow, but it's a good temporary solution. Finally, there was the ability to zip it. As you can see on GitHub, there is a Drive link to the project's zip file, since it's 7 GB. However, there were many times when it worked on my computer but not on my teammates' machines or my laptop, only on my PC. We traced this to the paths the .exe uses to launch all the terminals and Unity: it reads specific paths in the folder, so for it to work, the project has to sit in the OneDrive Documents folder, and after cloning from GitHub you have to build a dist as a temporary fix. We are actively working to make it run as an app no matter where it's extracted.
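The "works only from one folder" symptom is the classic hard-coded-path problem. A sketch of the usual fix (function and file names here are assumptions, not the project's actual code) is to resolve every resource relative to the script or frozen executable instead of the current working directory:

```python
import os
import sys

def app_root():
    """Directory the app actually lives in, whether run as a plain
    script or as a frozen .exe (PyInstaller sets sys.frozen)."""
    if getattr(sys, "frozen", False):
        return os.path.dirname(os.path.abspath(sys.executable))
    # __file__ points at this script; fall back to argv[0] if absent
    return os.path.dirname(os.path.abspath(globals().get("__file__", sys.argv[0])))

def resource(*parts):
    """Absolute path to a bundled file, e.g. resource('models', 'synz.gguf').
    The file names are hypothetical examples."""
    return os.path.join(app_root(), *parts)
```

With this, subprocess launches and Unity handoffs can be given `resource(...)` paths, so the folder can be extracted anywhere.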

Accomplishments that we're proud of

We are very proud that it can move its model and speak to the user, give advice in a very sarcastic and hilarious tone, and that the whole thing works and can help with and give advice on code.

What we learned

We learned a lot, such as transformer models, how to implement speech-to-text and text-to-speech, and ports and how to communicate from one process to another.

What's next for Synz

For future implementations we want to pay for our own VTuber model, and to stop using the current voice, switching to one native to us but still female. Additionally, I want it to work with math and physics problems: you write problems in OneNote or on a tablet, and when you make a mistake it notices and tells you, e.g. "you forgot a + or -", "that should be positive", or, if you're doing integrals, "include a +C". We also want to keep improving its ability to talk and have personality.

Built With
