Inspiration

Our link shortener sports INSANE performance, destined to remind you of the good old times when software was simple and fast. We don't waste time with things like a frontend; our project is nothing but good old-fashioned HTTP requests.

What it does

Our link shortener runs a cluster of highly optimized containers to provide link shortening services at massive scale.

How we built it

A key part of our build was our bulletproof CI/CD pipeline. Every push to main is automatically deployed to our endpoint, letting us ship code effortlessly. Every push is also run against a pytest suite; if any test fails, deployment is skipped, ensuring we only deploy healthy builds.
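The test-gated deploy described above can be sketched as a GitHub Actions workflow like the one below. This is an illustrative sketch, not our exact config: the secret name, server address, and deploy commands are placeholders.

```yaml
name: ci-cd
on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: pytest          # any failure here fails the job

  deploy:
    needs: test              # deploy runs only if the pytest job passed
    runs-on: ubuntu-latest
    steps:
      - name: Deploy over SSH
        run: |
          # DEPLOY_KEY is a repository secret; never commit private keys
          echo "${{ secrets.DEPLOY_KEY }}" > key && chmod 600 key
          ssh -i key deploy@example.com 'cd app && git pull && docker compose up -d --build'
```

The `needs: test` line is what gives us the "skip deployment on failing tests" behavior.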

We also have automatic alerting hooked up to our email accounts, so if our service ever goes down, we will find out within 60 seconds!

We used Docker's container scaling with an Nginx load balancer so we can scale our app instances up and down by changing a single parameter when we run the app. We also added PgBouncer to get around PostgreSQL's weakness in handling many connections, and added Redis caching to our most important routes (like the "hot path" of redirecting users) to reduce database load as much as possible.
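The redirect hot path uses the classic cache-aside pattern: check the cache first, and only fall through to the database on a miss. Here is a minimal sketch of that logic; a plain dict stands in for the Redis client and the database, and all names are illustrative.

```python
cache = {}  # stands in for a Redis client (e.g. redis.Redis)
database = {"abc123": "https://example.com/some/long/url"}  # stands in for PostgreSQL


def resolve(slug):
    """Return the long URL for a slug, hitting the database only on a cache miss."""
    url = cache.get(slug)
    if url is not None:
        return url                # cache hit: no database round trip
    url = database.get(slug)      # cache miss: fall back to the database
    if url is not None:
        cache[slug] = url         # populate the cache for subsequent requests
    return url
```

After the first lookup, repeated redirects for the same slug never touch the database, which is what keeps the hot path fast under load.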

Challenges we ran into

Configuring the CD pipeline was difficult. We needed to make sure the automated GitHub Actions runner could safely SSH into our production server and run the correct commands to release a new build. Of particular concern was making sure we safely stored secrets like private SSH keys.

Handling 500 concurrent users is hard. It's so much traffic that we had to raise Nginx's worker connection limit to 8192; otherwise it would reject 5% of connections before they even reached our app instances. Gunicorn, our web server, uses synchronous workers, so if there weren't enough workers to handle the traffic, or if the database got too overloaded, response times would just keep climbing. Adding caching was practically required to get it running properly.
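The Nginx side of this setup looks roughly like the fragment below. This is a sketch with illustrative upstream names and ports, not our exact config; the key line is the raised connection limit.

```nginx
events {
    worker_connections 8192;   # raised from the default 1024 so bursts aren't rejected
}

http {
    upstream app {
        server app1:8000;      # one entry per Gunicorn container
        server app2:8000;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app;   # round-robin across the app instances
        }
    }
}
```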

Accomplishments that we're proud of

The project's ability to remain online and catch bugs early through CI/CD is very cool. We increased performance from around 20 requests per second with 50 users to 500 requests per second with 500 users, a 25x jump in throughput under 10x the load. That's an incredible increase! We also found a way to split up the work very well, which let us complete almost all of the tasks.

What we learned

Learning about GitHub Actions was extremely cool! We had no idea GitHub offers free server runtime on git pushes. It is a super powerful service! We also learned a lot about performance optimization with Python servers and Docker.

What's next for todo

Although we like simple things, adding a small frontend would add some charm.
