Inspiration
I am going through a crisis time at the moment. Everyone is very kind and asking me 'How are you?'
I get to choose between lying and saying I am ok, or having that embarrassing moment when you admit things aren't so great and... errrrmmmmm.
People think that because you are smiling things are fine now. But they inevitably are not.
I have been considering how hard it is as an adult to name your feelings, and how much harder it is to explain to loved ones what is going on.
I want to start a piece of living art that will break down the barriers between the inner and outer me. My artwork will be constantly driven by my body.
With this in mind I have dipped my toe into the world of Emotion Detection.
What it does
The website locates a face in the video stream and returns a probability of the expression being neutral, happy, sad, angry, fearful, disgusted or surprised. It then uses these probabilities to generate an animation.
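As a sketch of the idea: assuming the detector returns a plain object of per-expression probabilities (the shape face-api.js uses), the dominant expression driving the animation could be picked like this. The function name and the sample scores are illustrative, not the submitted code.

```javascript
// Pick the expression with the highest probability from a
// { name: probability } object, as returned by the face detector.
function dominantExpression(expressions) {
  return Object.entries(expressions)
    .reduce((best, current) => (current[1] > best[1] ? current : best));
}

// Illustrative scores for one video frame (made up, not real output):
const frame = {
  neutral: 0.05, happy: 0.82, sad: 0.01, angry: 0.02,
  fearful: 0.01, disgusted: 0.01, surprised: 0.08
};

console.log(dominantExpression(frame)[0]); // "happy"
```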
In addition Tristan has completed a project that will complement mine by hacking a SmartPhone BCI to interact with physical tech.
How I built it
I used a fantastic API from a GitHub user called Just A Dude Who Hacks. I have been aware lately that I definitely need to spend more time researching machine learning, but with only 24 hours that was a big ask, so the JADWHacks Face-API enabled me to develop a proof of concept prior to that learning.
This, in addition to my previous knowledge of CSS and JavaScript, resulted in the artefact that I am submitting.
Tristan has recorded a short video describing what he did.
Challenges I ran into
Deciding how to display the art took up a large part of my day. I considered a range of tools and settled on CSS animation because it offered quick integration with the Face-API and easily manipulated objects.
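One way this integration can work is mapping a probability onto a value that a CSS animation reads. This is a hedged sketch rather than the actual submission code; the function, property name and scaling are assumptions.

```javascript
// Map a probability in [0, 1] to a CSS animation duration in seconds:
// stronger emotion -> shorter duration -> faster animation.
function probabilityToDuration(probability, maxSeconds = 4, minSeconds = 0.5) {
  const p = Math.min(Math.max(probability, 0), 1); // clamp to [0, 1]
  return maxSeconds - p * (maxSeconds - minSeconds);
}

// In the browser the value would be written to a custom property that
// a CSS animation uses, e.g.:
// document.documentElement.style.setProperty(
//   '--pulse-duration', probabilityToDuration(happyScore) + 's');

console.log(probabilityToDuration(1)); // 0.5
console.log(probabilityToDuration(0)); // 4
```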
I had no prior experience of node.js. I am impressed with how quick it is to pick up and will definitely use it in the future. Still, it was a steep learning curve to implement an API I was unfamiliar with on a platform I was also unfamiliar with.
Finding out where to access the probability variables took longer than anticipated. Because they are displayed as part of a canvas, simply retrieving them from the DOM was not possible.
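The workaround is to read the scores from the detection result object itself rather than from the rendered canvas. A minimal sketch, assuming the result shape that face-api.js returns from withFaceExpressions(); the helper function here is mine, not part of the API:

```javascript
// The detection result carries the scores directly, so there is no need
// to scrape the overlay canvas. Assumed shape (from face-api.js):
//   { detection: {...}, expressions: { neutral: ..., happy: ..., ... } }
function extractExpressions(result) {
  if (!result || !result.expressions) return null; // no face found this frame
  return result.expressions;
}

// Mock result standing in for one detected video frame:
const mockResult = { expressions: { neutral: 0.1, happy: 0.9 } };
console.log(extractExpressions(mockResult).happy); // 0.9
console.log(extractExpressions(undefined));        // null
```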
All of this limited how creative I could be with the output but I think as a proof of concept it is reasonable.
Accomplishments that I'm proud of
Completing a working website
What I learned
Node.js
JADWHacks/Face-API
CSS Animations
What's next for Cre8iveMe
I am not happy with the quality of the probabilities and wish to use a greater range of data. The issue with an image-processing approach is that it feeds back to the world the external face, not the feelings behind the face. In order to break down the barriers to the inner me I need to retrieve internal data. To do this I will be investing in a NeuroSky MindWave Mobile, which returns a wide range of data. NeuroSky also provide a range of developer tools.
I then need to set about training my machine to be aware of colours, textures, images and maybe sounds that I feel reflect my emotional state at given points in time. After a period of learning I will be able to see how well the machine can reflect these back to me.
This data will then be used in a series of artworks, installations and attire that will reflect the inner me. Tristan has shown how brain-wave data can be used to control wearable tech; it's the future, very Gen Z.
It can even be used to personalise the inside of your Bentley: to calm you if you are feeling stressed, or to wake you up in the morning if you are sleepy.
Going forward I see applications for others, offering an alternative method of expressing or acknowledging one's feelings. Mental health is a growing issue in our society. If we can start to see how others are really feeling, then we can start to be more compassionate towards each other.
I hope this will be helpful for others like myself (though maybe not as intrusive as my full proposal).