We incorporated the commercial models Vokaturi, Empath, and DeepAffects, together with 6 bias recognition models.
- Record your voice (up to 3 seconds) by clicking the microphone button.
- Check the detection results in the table below.
- Hover your cursor over the statistics image to view zoomed results.
- Results table

| Vokaturi | Empath | DeepAffects |
|---|---|---|
| 😡 Angry | 😐 Calm | 😡 Anger |
| 😒 Disgust | 😉 Joy | 😐 Neutral |
| 😱 Fear | 😡 Anger | 😍 Excited |
| 😝 Happy | 😖 Sorrow | 😖 Frustration |
| 😐 Neutral | 😁 Energy | 😝 Happy |
| 😭 Sad | | |
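Because each service reports a different label set (see the table above), the labels may need to be normalized before comparison. A minimal sketch, assuming the display names in the table match the strings each API returns (the `common_labels` helper is hypothetical, not part of this project):

```python
# Emotion label sets reported by each model, taken from the results table.
LABELS = {
    "Vokaturi":    ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad"],
    "Empath":      ["Calm", "Joy", "Anger", "Sorrow", "Energy"],
    "DeepAffects": ["Anger", "Neutral", "Excited", "Frustration", "Happy"],
}

def common_labels(models=LABELS):
    """Return the case-folded labels that every model can report."""
    sets = [{label.lower() for label in labels} for labels in models.values()]
    return set.intersection(*sets)
```

Note that with these exact strings the intersection is empty ("Angry" vs. "Anger"), which is why a mapping step between label vocabularies is needed at all.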
- Requirement list:
  - Python
  - Miniconda
  - Pydub
  - FFmpeg
  - Requests
  - PyTorch
  - Flask
  - Urllib3
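A quick way to confirm the environment matches the requirement list is to probe each dependency before starting the server. This is a sketch; the import names are assumed (PyTorch imports as `torch`, and FFmpeg is a system binary rather than a Python package, so it is checked on `PATH` instead):

```python
import importlib.util
import shutil

def missing_packages(names):
    """Return the names that are not importable in the current environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Import names assumed from the requirement list above.
REQUIRED = ["pydub", "requests", "torch", "flask", "urllib3"]

if __name__ == "__main__":
    print("missing packages:", missing_packages(REQUIRED) or "none")
    print("ffmpeg on PATH:", shutil.which("ffmpeg") is not None)
```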
- Check that the `models` directory exists.
- Clone the project and, inside the `web` folder, run:

  ```shell
  $ python server.py
  ```

  Access the web application via https://127.0.0.1:5000 by default.
You can modify `config.json` to make sure the server runs properly:

```json
"API": {
    "empath": "...",
    "deepaffect": "..."
}
```

If you are running server.py on an arm64 device, you can change the lib to:

```json
"Lib": "lib/lib_arm64.so"
```

And the model path:

```json
"model_path": [
    {"acted_cnn_lstm": "models/acted/cnn_lstm/"},
    {"acted_cnn_lstm_attention": "models/acted/cnn_lstm_attention/"},
    {"acted_cnn_lstm_attention_multitask": "models/acted/cnn_lstm_attention_multitask/"},
    {"observed_cnn_lstm": "models/observed/cnn_lstm/"},
    {"observed_cnn_lstm_attention": "models/observed/cnn_lstm_attention/"},
    {"observed_cnn_lstm_attention_multitask": "models/observed/cnn_lstm_attention_multitask/"}
]
```

Finally, you can replace the API keys with your own by modifying `config.json` and changing the `.perm` files.
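Since `"model_path"` is a list of single-entry objects rather than a plain mapping, code that loads the config needs to flatten it. A minimal sketch, assuming the `config.json` layout shown above (`flatten_model_paths` is a hypothetical helper, not part of the project's code):

```python
def flatten_model_paths(config):
    """Flatten the "model_path" list of single-entry dicts into {name: path}."""
    return {name: path
            for entry in config.get("model_path", [])
            for name, path in entry.items()}

# Example mirroring the config fragment above (truncated to two entries).
example = {
    "model_path": [
        {"acted_cnn_lstm": "models/acted/cnn_lstm/"},
        {"observed_cnn_lstm": "models/observed/cnn_lstm/"},
    ]
}
```

The single-entry-dict shape preserves ordering in older JSON tooling; flattening it once at startup lets the rest of the server look models up by name.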
Thanks to our team: Team 1, IS4152/IS5452, NUS.
