Testing and quality control are major considerations for all projects. As such, multiple redundant testing/QC processes are built into our standard process for all code development, as described in the workflow documents referenced on this page. All testing is risk-based, so more complex updates are subject to more rigorous testing.
This page provides a brief overview of our testing/QC techniques.
We consider detailed documentation to be a crucial first step in the quality control process. The following are created before a version 1.0 release for all repositories, and maintained throughout the project.
Detailed documentation of each project's API/configuration is maintained. In most cases, we use a machine-readable settings-schema.json file (example) that is then used to generate human-readable wiki pages (example).
We document the expected functionality of our tools as part of the project wiki. As functionality evolves, the wiki page is updated (example).
When appropriate, a stand-alone data specification page is maintained in the project wiki (example).
Whenever possible, our projects include one or more reproducible examples, which are automatically deployed using GitHub Pages whenever the repository is updated. (example code, hosted page)
Each repository has a README.md file that briefly describes intended use and provides a 'hello world'-style example.
Longer write-ups describing the use of the project are created as needed.
A formal description of project dependencies, versions, etc. is maintained (details vary by programming language/framework).
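To illustrate the schema-to-wiki idea mentioned above, here is a minimal sketch of how a machine-readable settings schema might be rendered as a human-readable wiki table. The schema fields and setting names below are invented for illustration; a real settings-schema.json will differ.

```python
import json

# Hypothetical settings-schema.json content (invented for illustration).
SCHEMA_JSON = """
[
  {"name": "width", "type": "number", "default": 800,
   "description": "Chart width in pixels"},
  {"name": "show_legend", "type": "boolean", "default": true,
   "description": "Toggle the legend"}
]
"""

def schema_to_wiki_table(schema_json):
    """Render a machine-readable settings schema as a markdown table
    suitable for pasting into a human-readable wiki page."""
    settings = json.loads(schema_json)
    lines = [
        "| Setting | Type | Default | Description |",
        "| --- | --- | --- | --- |",
    ]
    for s in settings:
        lines.append("| {name} | {type} | {default} | {description} |".format(**s))
    return "\n".join(lines)

print(schema_to_wiki_table(SCHEMA_JSON))
```

Keeping the schema as the single source of truth means the wiki table can be regenerated whenever a setting is added or changed, rather than edited by hand.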
We use a variety of testing methods across our projects.
A user (often called a "tester") confirms that specific functionality behaves as expected. Testing instructions (or just "tests") are included as part of an issue. Issue numbers are referenced in both code commits and pull requests to aid traceability. Testers record test results in pull requests using GitHub's review functionality.
Testing that occurs immediately before a new version is released. Results of regression testing are recorded in "Development PRs" using GitHub's review functionality. We keep a running list of regression tests in the project wiki (example).
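A running list of regression tests like the one kept in the wiki can also be driven programmatically. The sketch below is a minimal, hypothetical runner; the individual checks and their names are invented placeholders, not our actual regression tests.

```python
# Minimal sketch of a regression-test runner (names invented for illustration).

def check_default_render():
    # Placeholder: e.g., confirm the chart renders with default settings.
    return True

def check_empty_data():
    # Placeholder: e.g., confirm the tool handles an empty data file.
    return True

# The running list of regression tests, analogous to the wiki page.
REGRESSION_TESTS = [
    ("default render", check_default_render),
    ("empty data", check_empty_data),
]

def run_regression_suite(tests):
    """Run every registered check and report pass/fail per test."""
    return {name: bool(fn()) for name, fn in tests}

results = run_regression_suite(REGRESSION_TESTS)
print(results)
```

Recording each named result separately makes it straightforward to paste a pass/fail summary into a release PR.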
Programmatic testing that typically occurs whenever code is committed to a project.
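As a sketch of what such per-commit programmatic testing looks like, here is a small assert-style unit test of the kind a CI service runs automatically. The function under test (`summarize`) is invented for illustration.

```python
# A minimal unit test of the kind a CI service runs on every commit.
# The function under test (summarize) is invented for illustration.

def summarize(values):
    """Return basic descriptive statistics for a non-empty list of numbers."""
    if not values:
        raise ValueError("values must be non-empty")
    return {
        "n": len(values),
        "min": min(values),
        "max": max(values),
        "mean": sum(values) / len(values),
    }

def test_summarize_basic():
    stats = summarize([1, 2, 3, 4])
    assert stats == {"n": 4, "min": 1, "max": 4, "mean": 2.5}

def test_summarize_empty_raises():
    try:
        summarize([])
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for empty input")

test_summarize_basic()
test_summarize_empty_raises()
print("all tests passed")
```

Because the tests are plain functions with assertions, any test runner (or the CI service itself) can execute them on every commit and fail the build on regression.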
Developers review other developers' code (typically in feature branches). Results of code reviews are recorded in Feature PRs using GitHub's review functionality.
CAT is a home-grown environment for testing our interactive graphics.
We use TravisCI as a continuous integration framework on several projects.
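For readers unfamiliar with TravisCI, a project's build is driven by a `.travis.yml` file in the repository root. The fragment below is a minimal, hypothetical sketch; the actual configuration varies by project.

```yaml
# Minimal hypothetical .travis.yml sketch; real projects will differ.
language: node_js
node_js:
  - "12"
install:
  - npm install
script:
  - npm test
```

TravisCI runs the `install` and `script` steps on every push and pull request, so failing tests block a merge.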
We don't like arguing about tabs vs. spaces, so we use linters and code formatters when possible. The prettier.js package is worth a special mention; it's great.
Finally, a note on software validation. Much of our work is used in the conduct of clinical trials, so there are several relevant regulations that we are well aware of (the details are beyond the scope of this page). All projects undergo a full regulatory review before being approved, and the testing processes and workflows described in this document are designed to be trackable, exportable and archivable to support usage in regulatory documentation when appropriate.