Inspiration

The Internet is filled with content inappropriate for younger audiences, ranging from suggestive imagery to explicit graphics. How can we rid the Internet of it?

What it does

This prototype intercepts HTTPS connections using a root CA certificate trusted by the client machine. Whenever an image response passes through, it is held back and fed into a convolutional neural network, which scores how explicit the image's contents are. If the score crosses a threshold, the image is discarded and a red 'X' image of the same file type is substituted in its place.
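The filtering step described above can be sketched in a few lines. This is a minimal sketch, not our working code: the classifier is a stub standing in for the CNN, the threshold value is assumed, and the pre-rendered red-'X' image bytes are placeholders.

```python
# Sketch of the filter decision: score an image, swap in a red 'X' of the
# same file type if the score is too high. Names and values are illustrative.

EXPLICIT_THRESHOLD = 0.8  # assumed cutoff; would be tuned on validation data

# Placeholder bytes for pre-rendered red-'X' images, one per supported format.
RED_X_IMAGES = {
    "image/png": b"<red-x png bytes>",
    "image/jpeg": b"<red-x jpeg bytes>",
    "image/gif": b"<red-x gif bytes>",
}

def classify_explicitness(image_bytes: bytes) -> float:
    """Stand-in for the CNN; would return an explicitness score in [0, 1]."""
    return 0.0  # stub

def filter_image(image_bytes: bytes, content_type: str,
                 classifier=classify_explicitness) -> bytes:
    """Return the original image, or a red 'X' of the same type if too explicit."""
    score = classifier(image_bytes)
    if score > EXPLICIT_THRESHOLD:
        # Fall back to the PNG replacement for unrecognized image types.
        return RED_X_IMAGES.get(content_type, RED_X_IMAGES["image/png"])
    return image_bytes
```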

How we built it

We used mitmproxy, an open-source man-in-the-middle proxy, to intercept the SSL/TLS connections. mitmproxy was meant to capture images in transit and pass them to a convolutional neural network implemented in Keras.
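A mitmproxy addon for this is a plain Python script defining hook functions such as `response`, loaded with `mitmproxy -s addon.py`. The sketch below uses that real hook; the classifier call, threshold, and red-'X' bytes are hypothetical stand-ins for the pieces we never finished wiring up.

```python
# Hypothetical mitmproxy addon sketch: intercept image responses, score them,
# and replace anything too explicit with a red 'X'. Classifier is a stub.

RED_X_PNG = b"<pre-rendered red-x png bytes>"  # placeholder

def is_image(content_type: str) -> bool:
    """True if the Content-Type header denotes an image (ignoring parameters)."""
    return content_type.split(";")[0].strip().lower().startswith("image/")

def classify_explicitness(image_bytes: bytes) -> float:
    """Stand-in for the Keras CNN."""
    return 0.0  # stub

def response(flow):
    """mitmproxy hook: called once per completed server response."""
    ctype = flow.response.headers.get("content-type", "")
    if is_image(ctype) and classify_explicitness(flow.response.content) > 0.8:
        flow.response.content = RED_X_PNG
        flow.response.headers["content-type"] = "image/png"
```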

Challenges we ran into

PaddlePaddle, Baidu's deep learning toolkit, was our first choice of neural network engine. However, given its sparse documentation and our limited time, we switched to Keras with a TensorFlow backend at the last minute. Unfortunately, we were unable to get the model to run due to cuDNN errors; our graphics card was likely running out of memory given the model's size. Debugging the neural network forced us to put off learning the Python 3 API for mitmproxy. Also, half of our team failed to show up.
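One common mitigation we did not get to try: telling TensorFlow to grow its GPU memory allocation on demand instead of reserving the whole card up front, which can surface as cuDNN initialization errors. A sketch for the TensorFlow 1.x + standalone Keras stack we were using (a config fragment, not a fix for a genuinely oversized model):

```python
import tensorflow as tf
from keras import backend as K

# Allocate GPU memory incrementally rather than grabbing it all at startup.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))
```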

Accomplishments that we're proud of

We sharpened our deep learning skills, learned how a trusted root CA certificate can be used to intercept SSL/TLS (HTTPS) traffic, and got practice working as a team.

What's next for Scrubby Dubby

Scrubby Dubby needs to be fixed up and made functional, then trained on a proper dataset of images ranging from clean to explicit nudity.
