Author: Xavier M. Puspus
I used a BERT wrapper based on this paper. I built a web application that generates questions and possible answers from plain text or from text parsed from a website.
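The repository does not show how the "text parsed from a website" step works, so here is a minimal stdlib-only sketch of that idea, assuming a simple visible-text extraction pass; the names `TextExtractor` and `extract_text` are illustrative, not taken from the app. A real app could first fetch the page HTML with `urllib.request.urlopen` before passing it to a parser like this.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> blocks."""
    def __init__(self):
        super().__init__()
        self._skip = False   # True while inside <script>/<style>
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def extract_text(html: str) -> str:
    """Return the visible text of an HTML document as one string."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

sample = ("<html><head><style>p{color:red}</style></head>"
          "<body><h1>BERT</h1><p>Question generation demo.</p></body></html>")
print(extract_text(sample))  # BERT Question generation demo.
```

The extracted plain text can then be fed to the question-generation model exactly as if the user had pasted it in directly.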
I used the latest release of Streamlit to deploy the ML model and serve the web app locally.
In order to run the app, you must have text2text available on your machine and install Streamlit using:

foo@bar:~$ pip install streamlit

Afterwards, cd into the directory containing app.py and run this in the terminal:

foo@bar:~$ streamlit run app.py

The web app should look something like this:
