Speech-Recognition-Bias Web Application

We incorporated three commercial models (Vokaturi, Empath, and DeepAffects)

and six bias recognition models.

Preview Screenshot


Usage

  1. Record your voice (up to 3 seconds) by clicking the microphone button.
  2. Check the detection results in the following table.
  3. Hover your cursor over the statistical image to view zoomed results.
  4. Results table:
| Vokaturi   | Empath    | DeepAffects    |
| ---------- | --------- | -------------- |
| 😡 Angry   | 😐 Calm   | 😡 Anger       |
| 😒 Disgust | 😉 Joy    | 😐 Neutral     |
| 😱 Fear    | 😡 Anger  | 😍 Excited     |
| 😝 Happy   | 😖 Sorrow | 😖 Frustration |
| 😐 Neutral | 😁 Energy | 😝 Happy       |
| 😭 Sad     |           |                |

Deployment

  1. Requirement list:
  • Python
  • Miniconda
  • Pydub
  • FFmpeg
  • Requests
  • Pytorch
  • Flask
  • Urllib3
  2. Check that your models directory exists.

  3. Clone the project, change into the web folder, and run:

$ python server.py

By default, the web application is served at https://127.0.0.1:5000
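Before launching server.py, you can sanity-check that the listed dependencies are importable. This is a minimal sketch, not part of the project; the import names are assumptions (e.g. Pytorch installs as the module torch, and FFmpeg is a system binary rather than a Python package, so it is not checked here):

```python
import importlib.util

# Import names for the packages in the requirement list above
# (note: pip/conda package names can differ from import names).
REQUIRED = ["pydub", "requests", "torch", "flask", "urllib3"]

def missing_packages(names=REQUIRED):
    """Return the subset of package names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    missing = missing_packages()
    if missing:
        print("Install before running server.py:", ", ".join(missing))
    else:
        print("All dependencies found.")
```

Anything this prints as missing can be installed with pip or conda before starting the server.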

Configurations

You can modify config.json to make sure the server runs properly.

  "API": {
    "empath": "...",
    "deepaffect": "..."
  }
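As a sketch of how these keys might be read at startup, the snippet below loads the two entries from the fragment above. The function name is hypothetical; the key names ("API", "empath", "deepaffect") are taken from the fragment:

```python
import json

def load_api_keys(path="config.json"):
    """Return the Empath and DeepAffects API keys from config.json.

    Key names follow the config fragment above; adjust if your
    file uses different names.
    """
    with open(path) as f:
        config = json.load(f)
    api = config["API"]
    return api["empath"], api["deepaffect"]
```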

If you are running server.py on an arm64 device, change the lib entry to:

"Lib": "lib/lib_arm64.so"

And model path:

 "model_path": [
     {"acted_cnn_lstm": "models/acted/cnn_lstm/"},
     {"acted_cnn_lstm_attention": "models/acted/cnn_lstm_attention/"},
     {"acted_cnn_lstm_attention_multitask": "models/acted/cnn_lstm_attention_multitask/"},
     {"observed_cnn_lstm": "models/observed/cnn_lstm/"},
     {"observed_cnn_lstm_attention": "models/observed/cnn_lstm_attention/"},
     {"observed_cnn_lstm_attention_multitask": "models/observed/cnn_lstm_attention_multitask/"}
 ]
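Since model_path is stored as a list of single-entry objects, server code that looks models up by name has to merge them first. A minimal sketch (the helper name is hypothetical, not from the project):

```python
def flatten_model_paths(model_path):
    """Merge config.json's model_path (a list of single-entry dicts)
    into one {model_name: directory} mapping for easy lookup."""
    merged = {}
    for entry in model_path:
        merged.update(entry)
    return merged

# Example with two entries from the config above:
# flatten_model_paths([{"acted_cnn_lstm": "models/acted/cnn_lstm/"}])
# gives {"acted_cnn_lstm": "models/acted/cnn_lstm/"}
```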

Finally, replace the API keys with your own by modifying config.json, and update the .pem files.

Acknowledgement

Thanks to our Team 1 IS4152/IS5452, NUS