
Quick Start

  1. Run Locust locally with `docker compose --profile locust up` (you may not need `sudo`; add the `--force-recreate` flag if you are having trouble with your containers).
  2. (Optional) Run the data load management command in the environment you want to test against, with additional command arguments as desired: `./manage.py gen_locust_load_test_data`
  3. Open a browser to http://localhost:8089 and begin testing.
  4. Once you are done testing, you may wish to clean up the load test data created in step 2 using the following management command: `./manage.py delete_committee_account <committee_account_id>`

Locust Testing detailed instructions

Locust testing is used to simulate swarms of users making requests to an API service, allowing API performance to be tested under load. Locust tests are set up by spinning up a Docker container; the user can then visit http://localhost:8089/ to start a testing session.

The instructions for running tests with Locust follow:

(Optional) Prepare testing data with gen_locust_load_test_data command

A devops management command has been added to insert test data directly into the database. It can be executed as follows and uses the test@test.com email. By default, the committees created start at C33333333 and count upward:

```sh
python manage.py gen_locust_load_test_data
```

Optional additional flags can be used to override various defaults:

  • --base_committee_number (Default 33333333)
  • --number_of_committees
  • --number_of_reports
  • --number_of_contacts
  • --number_of_transactions
  • --single_to_tiered_transaction_ratio
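
For instance, a run that overrides several of these defaults might look like the following (the flag values here are purely illustrative, not recommendations):

```sh
python manage.py gen_locust_load_test_data \
  --base_committee_number 44444444 \
  --number_of_committees 2 \
  --number_of_reports 10 \
  --number_of_contacts 50 \
  --number_of_transactions 200
```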

Setup - Additional steps for remote testing

  1. Set an additional environment variable:
  • Set the target API service for testing in docker-compose.yml.
  • As an example, this is what you would set in order to target DEV:
    • `-f /mnt/locust/locust_run.py --master -H https://dev-api.fecfile.fec.gov`
  2. Refresh the database:
  • As an example, here is how to blow away and then recreate the RDS in the load testing mirror in DEV:
```sh
# create new db
cf create-service aws-rds small-psql-redundant load-fecfile-api-rds-NEW

# wait for create to complete before moving on
while true; do clear; cf services; sleep 30; done

# unbind old
cf unbind-service load-fecfile-scheduler load-fecfile-api-rds
cf unbind-service load-fecfile-web-api load-fecfile-api-rds
cf unbind-service load-fecfile-web-services load-fecfile-api-rds

# rename old and new
cf rename-service load-fecfile-api-rds load-fecfile-api-rds-OLD
cf rename-service load-fecfile-api-rds-NEW load-fecfile-api-rds

# bind new
cf bind-service load-fecfile-web-api load-fecfile-api-rds
cf bind-service load-fecfile-web-services load-fecfile-api-rds
cf bind-service load-fecfile-scheduler load-fecfile-api-rds

# delete old
cf delete-service load-fecfile-api-rds-OLD
```

Running Tests

  1. Run `docker compose --profile locust up -d` to spin up the testing environment.
  • (Optional) Scale up by appending `--scale locust-follower=4` to the command.
  • You may also need the `--force-recreate` flag.
  2. Go to http://localhost:8089/ in your browser of choice to run tests.
  3. If testing locally, set Host to http://fecfile-api-proxy:8080.

Recommended tests:

  • 5 users / 1 ramp-up rate
  • 100 users / 1 ramp-up rate
  • 500 users / 5 ramp-up rate

Advanced settings: Run time 5m
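
If you prefer to skip the web UI, runs like those above can also be driven headlessly using Locust's standard CLI flags (`-u` users, `-r` spawn rate, `-t` run time). A sketch of the 100-user run, assuming Locust is installed locally and the local proxy host from step 3 above:

```sh
locust -f locust_run.py --headless -u 100 -r 1 -t 5m \
  -H http://fecfile-api-proxy:8080
```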

How our Locust Testing works under the hood

When you start a test run from your browser, the testing proceeds in two phases: Setup and Tasks. Setup runs once at the start of each thread, and Tasks then run in a loop for the duration of the test (if a run time is specified).

Setup

The setup for each thread retrieves a list of report_ids to be used throughout the testing tasks. It also selects a subset of these report ids to submit, based on the number of threads (users) configured for the test run, and then prepares those report ids for submission (the submission itself happens in a load testing task).

Tasks

The Tasks phase consists of swarm followers running functions tagged with the @task decorator in a loop for the duration of the testing session. There are (as of writing) five tasks:

  • Celery Test
  • Load Contacts
  • Load Reports
  • Load Transactions
  • Submit Report
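
The shape of this setup-then-tasks loop can be sketched in plain Python (stand-in names throughout — the real tasks live in the Locust test file and issue HTTP requests). `on_start()` plays the Setup role and runs once per simulated user; the functions in `tasks` are then picked at random in a loop, roughly what Locust does with `@task`-decorated methods:

```python
import random

class FakeUser:
    def on_start(self):
        # Setup: fetch the report ids used by the tasks below,
        # and choose a subset to submit during the run.
        self.report_ids = [f"report-{i}" for i in range(5)]
        self.reports_to_submit = self.report_ids[:2]

    def load_reports(self):
        return f"GET /reports ({len(self.report_ids)} known)"

    def submit_report(self):
        rid = self.reports_to_submit.pop() if self.reports_to_submit else None
        return f"POST /submit {rid}"

    tasks = [load_reports, submit_report]

def run_session(user, iterations):
    user.on_start()              # Setup runs once per thread
    log = []
    for _ in range(iterations):  # Tasks loop for the session duration
        task = random.choice(type(user).tasks)
        log.append(task(user))
    return log
```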

Silk Profiling

In addition to load testing, Silk query profiling can be installed to inspect queries and response times.

To run Silk profiling, export INCLUDE_SILK=True in your local environment.

Once set up, Silk profiling runs automatically as the API receives and processes requests. To view the results, visit the API's /silk endpoint (for local development: localhost:8080/silk/).
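
As a rough illustration of how such an environment flag is commonly wired into Django settings — this is an assumed pattern, not the project's actual settings.py — the flag typically gates Silk's app and middleware registration:

```python
import os

# Minimal stand-ins; in a real settings module these lists already exist.
INSTALLED_APPS = ["django.contrib.admin"]
MIDDLEWARE = ["django.middleware.common.CommonMiddleware"]

# Hypothetical wiring of the INCLUDE_SILK flag: only register Silk's app
# and middleware when the environment variable is set to "True".
INCLUDE_SILK = os.environ.get("INCLUDE_SILK", "False") == "True"
if INCLUDE_SILK:
    INSTALLED_APPS.append("silk")
    MIDDLEWARE.insert(0, "silk.middleware.SilkyMiddleware")
```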

If setting up from scratch or looking for usage instructions, you can find documentation here.