Jump to Category

- Fundamentals & Theory
- Test Design & Patterns
- Mocking & Test Doubles
- Integration Testing Strategies
- Advanced & Modern Concepts
Fundamentals & Theory
1. What is the Test Pyramid and what does it advocate?
The Test Pyramid is a metaphor that describes a strategy for structuring your automated test suite. It advocates for having a large base of fast, isolated **Unit Tests**, a smaller middle layer of **Integration Tests**, and a very small top layer of slow, brittle **End-to-End (E2E) UI Tests**.
The key principle is to push tests as far down the pyramid as possible. If a behavior can be verified with a unit test, it should be. This results in a test suite that is fast, reliable, and provides quick feedback, as most tests are small and have no external dependencies.
Read Martin Fowler’s article on the Test Pyramid.

2. Differentiate between a Unit Test and an Integration Test with a clear example.
- A **Unit Test** verifies a single, small “unit” of code (like a method or a class) in complete isolation from its dependencies. All external dependencies (like databases, file systems, or other services) are replaced with test doubles (mocks or stubs). Example: Testing a `Calculator` class’s `add(a, b)` method to ensure it returns `a + b`.
- An **Integration Test** verifies that multiple components of a system work together correctly. It tests the integration points between units. Example: Testing that an API controller, when it receives a request, correctly calls a service class, which in turn successfully saves data to a real (or in-memory) database.
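As a minimal sketch (assuming a Jest + TypeScript setup), the unit-test example above might look like this:

```ts
// Unit test: the Calculator has no external dependencies,
// so nothing needs to be mocked.
class Calculator {
  add(a: number, b: number): number {
    return a + b;
  }
}

describe("Calculator", () => {
  it("add returns the sum of its arguments", () => {
    expect(new Calculator().add(2, 3)).toBe(5);
  });
});
```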
3. What is the difference between “sociable” and “solitary” unit tests?
This distinction describes how a unit test treats its internal dependencies (classes within the same service).
- **Solitary Unit Tests:** Adhere to a strict definition where the “unit” is a single class. All of its collaborators, even other classes within the same application, are replaced with test doubles. This provides extreme isolation.
- **Sociable Unit Tests:** Test a unit as a cluster of related classes that work together to achieve a behavior. For example, a test for a `UserService` might use a real `User` entity object. Dependencies outside this cluster (like a database repository) are still mocked. This often results in more realistic and less brittle tests, as in the sketch below.
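A minimal sketch of the sociable style, assuming Jest + TypeScript (`UserService`, `User`, and `UserRepository` are illustrative names; a solitary test would replace the `User` entity with a double as well):

```ts
interface UserRepository {
  save(user: User): void;
}

class User {
  constructor(public readonly name: string) {}
  isValid(): boolean {
    return this.name.trim().length > 0;
  }
}

class UserService {
  constructor(private readonly repo: UserRepository) {}
  register(name: string): boolean {
    const user = new User(name); // real collaborator inside the cluster
    if (!user.isValid()) return false;
    this.repo.save(user);
    return true;
  }
}

it("registers a valid user (sociable: real User, mocked repository)", () => {
  const repo: UserRepository = { save: jest.fn() };
  const service = new UserService(repo);

  expect(service.register("Ada")).toBe(true); // real User.isValid() ran
  expect(repo.save).toHaveBeenCalled();       // boundary dependency verified
});
```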
4. What are the properties of a good unit test, according to the FIRST principles?
FIRST is an acronym for:
- Fast: Unit tests should run very quickly so developers can run them frequently without interrupting their workflow.
- Independent/Isolated: Tests should not depend on each other. Their execution order should not matter, and one test’s failure should not cause others to fail.
- Repeatable: A test should produce the same result every time it is run, regardless of the environment. It should not depend on external factors like the network or current date.
- Self-Validating: The test should automatically determine if it passed or failed. It should not require a human to manually inspect the output. Assertions provide this.
- Timely (or Thorough): Tests should be written in a timely manner, ideally just before or alongside the production code they test (as in TDD). They should also be thorough, covering edge cases and not just the “happy path.”
Test Design & Patterns
5. Compare Test-Driven Development (TDD) and Behavior-Driven Development (BDD).
- TDD (Test-Driven Development): A developer-focused practice where you write a failing unit test *before* you write the production code to make it pass. The cycle is “Red-Green-Refactor.” It focuses on the implementation details and correctness of a single unit of code.
- BDD (Behavior-Driven Development): An extension of TDD that focuses on the application’s overall behavior from the user’s perspective. It uses a natural language, structured format (like Gherkin’s `Given-When-Then`) to describe scenarios. This encourages collaboration between developers, QAs, and business analysts. BDD tests are typically higher-level integration or acceptance tests.
6. What is the Arrange-Act-Assert (AAA) pattern?
AAA is a pattern for structuring test methods to make them more readable and maintainable. It divides a test into three distinct sections:
- Arrange: Set up the initial state for the test. This involves creating objects, setting up mocks, and preparing any necessary test data.
- Act: Execute the specific method or unit of code that is being tested.
- Assert: Verify that the outcome of the “Act” phase is correct. This involves making one or more assertions about the result or the state of the system.
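A minimal sketch of AAA in a Jest + TypeScript test (the `ShoppingCart` class is illustrative):

```ts
class ShoppingCart {
  private items: number[] = [];
  addItem(price: number): void {
    this.items.push(price);
  }
  get total(): number {
    return this.items.reduce((sum, p) => sum + p, 0);
  }
}

it("computes the total of added items", () => {
  // Arrange: create the object under test and its initial state
  const cart = new ShoppingCart();
  cart.addItem(10);

  // Act: execute the single behavior being tested
  cart.addItem(5);

  // Assert: verify the outcome
  expect(cart.total).toBe(15);
});
```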
7. What is property-based testing?
Property-based testing is a technique where instead of writing tests for specific example inputs, you define the general properties or invariants of your code that should hold true for *any* valid input. The testing framework then generates hundreds or thousands of random inputs to try and find a counterexample that falsifies the property.
For example, instead of testing `add(2, 3) == 5`, you would state a property like “for any two integers a and b, `add(a, b) == add(b, a)`”. This is excellent for finding edge cases you wouldn’t think to write tests for manually. Libraries like `FsCheck` (.NET) or `jqwik` (Java) enable this.
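In the TypeScript ecosystem, `fast-check` plays the same role. A minimal sketch of the commutativity property above:

```ts
import fc from "fast-check";

const add = (a: number, b: number): number => a + b;

it("add is commutative for any two integers", () => {
  // fast-check generates many random (a, b) pairs and reports a
  // minimal counterexample if the property ever fails.
  fc.assert(
    fc.property(fc.integer(), fc.integer(), (a, b) => add(a, b) === add(b, a))
  );
});
```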
Explore Property-Based Testing with FsCheck.

8. What is the “testing trophy” concept?
The Testing Trophy is an alternative model to the Test Pyramid, proposed by Kent C. Dodds. It suggests a different distribution of testing effort, emphasizing integration tests.
The layers are, from the bottom to the top:
- Static Analysis: (Linter, Type Checker) – The base of the trophy.
- Unit Tests: Smaller than in the pyramid, focused on critical, complex logic.
- Integration Tests: The largest and most important part. These tests verify that multiple units work together as intended, providing the best trade-off between confidence and speed.
- End-to-End Tests: A very small number of full user journey tests.
Mocking & Test Doubles
9. Explain the difference between Mocks, Stubs, and Fakes.
These are all types of “test doubles,” but they have different purposes:
- Stub: Provides pre-canned answers to calls made during the test. It’s used when your test needs a dependency to return specific data to proceed. Stubs are primarily for state verification.
- Mock: An object on which you set expectations about how it will be called. After the test runs, you verify that the mock was called with the correct parameters and number of times. Mocks are primarily for behavior verification.
- Fake: A working implementation of the dependency, but simplified for testing purposes. It has real logic but is not suitable for production. A common example is an in-memory database.
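A minimal sketch of all three, assuming Jest + TypeScript (the `UserStore` interface and data are illustrative):

```ts
interface UserStore {
  find(id: number): { id: number; email: string } | undefined;
  save(user: { id: number; email: string }): void;
}

it("stub: pre-canned answers for state verification", () => {
  const stub: UserStore = {
    find: () => ({ id: 1, email: "ada@example.com" }),
    save: () => {},
  };
  expect(stub.find(1)?.email).toBe("ada@example.com");
});

it("mock: expectations about how a collaborator is called", () => {
  const send = jest.fn();
  send("ada@example.com", "Welcome!");
  expect(send).toHaveBeenCalledWith("ada@example.com", "Welcome!");
});

it("fake: a working but simplified implementation", () => {
  const users = new Map<number, { id: number; email: string }>();
  const fake: UserStore = {
    find: (id) => users.get(id),
    save: (user) => void users.set(user.id, user),
  };
  fake.save({ id: 1, email: "ada@example.com" });
  expect(fake.find(1)?.email).toBe("ada@example.com");
});
```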
10. What is the difference between state verification and behavior verification?
- State Verification: Checks the state of the system *after* the action has been performed. You call a method and then assert that a property on the object or a returned value is correct. This is typically done with stubs.
- Behavior Verification: Checks that certain methods were called on a collaborator object. You call a method and then verify that it made the correct calls to its dependencies (the mocks).
Many experts prefer state verification as it leads to tests that are less coupled to the implementation details of the code being tested.
11. Why can overusing mocks lead to brittle tests?
Overusing mocks, especially for behavior verification, can make your tests highly coupled to the *implementation* of your code, rather than its *behavior*. If you refactor the internal implementation of a method without changing its ultimate outcome, a mock-heavy test might break because the sequence or number of calls to a collaborator has changed. This makes refactoring difficult and creates tests that are a “change detector” rather than a true “bug detector.”
12. What is a test spy?
A spy is a type of test double that acts as a “wrapper” around a real object. It passes all calls through to the real object, but it also records information about how it was called (e.g., how many times a method was called, what arguments were passed). This allows you to use the real object’s logic in your test while still being able to verify specific interactions, combining aspects of stubs and mocks.
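A minimal sketch using Jest’s `jest.spyOn`, which wraps the real method by default (the `PriceService` class is illustrative):

```ts
class PriceService {
  basePrice(): number {
    return 100;
  }
  discounted(rate: number): number {
    return this.basePrice() * (1 - rate);
  }
}

it("records calls while delegating to the real implementation", () => {
  const service = new PriceService();
  const spy = jest.spyOn(service, "basePrice"); // wraps, does not replace

  expect(service.discounted(0.1)).toBe(90); // real logic still executed
  expect(spy).toHaveBeenCalledTimes(1);     // interaction still verifiable
});
```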
Integration Testing Strategies
13. Compare in-process vs. out-of-process integration tests.
- In-process: The test runs within the same process as the application being tested. This is common for web APIs, where a test runner can host the application in memory and make HTTP requests to it without going over the network. This is very fast and allows for easily mocking internal dependencies. `WebApplicationFactory` in .NET is a prime example.
- Out-of-process: The test runs against a fully deployed instance of the application (e.g., a running Docker container). The test communicates with the application over the network, just like a real client. This is a more realistic test but is slower and more complex to set up.
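A minimal in-process sketch in the Node ecosystem, assuming `express` and `supertest` (`supertest` hosts the app on an ephemeral local port, so no deployed instance is needed):

```ts
import express from "express";
import request from "supertest";

// The application under test, hosted in the test process itself.
const app = express();
app.get("/health", (_req, res) => {
  res.json({ status: "ok" });
});

it("exercises the API without a deployed instance", async () => {
  const response = await request(app).get("/health");
  expect(response.status).toBe(200);
  expect(response.body).toEqual({ status: "ok" });
});
```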
14. What is the purpose of Testcontainers?
**Testcontainers** is a library that allows you to programmatically spin up and tear down real services inside Docker containers as part of your automated tests. Instead of using an in-memory database or mocking a message broker, you can run an integration test against a real, ephemeral instance of PostgreSQL, RabbitMQ, or any other service.
This provides a much higher degree of confidence that your application will work correctly with its real dependencies, without the complexity of managing external test environments.
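A minimal sketch with the Node flavor of Testcontainers (assuming the `testcontainers` npm package and a local Docker daemon; Redis stands in for any real dependency):

```ts
import { GenericContainer, StartedTestContainer } from "testcontainers";

let container: StartedTestContainer;

beforeAll(async () => {
  // Pull and start a real Redis container for the duration of the suite.
  container = await new GenericContainer("redis:7")
    .withExposedPorts(6379)
    .start();
}, 60_000);

afterAll(async () => {
  await container.stop();
});

it("exposes a real, ephemeral service to the test", () => {
  const url = `redis://${container.getHost()}:${container.getMappedPort(6379)}`;
  expect(url).toContain("redis://"); // a real client would connect here
});
```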
Visit the official Testcontainers website.

15. How would you manage test data for integration tests?
Effective test data management is crucial for reliable integration tests.
- Database State: Each test should run in a transaction that is rolled back at the end, or the database should be wiped and re-seeded before each test run. Frameworks like Laravel’s `RefreshDatabase` trait automate this.
- Data Creation: Use “Object Mothers” or “Factories” to programmatically create test data. This makes the setup explicit and readable, e.g., `create_user(with_role: 'admin')` (a sketch follows this list).
- Isolation: Avoid letting tests depend on pre-existing data in a shared test database. Each test should create all the data it needs to run independently.
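A minimal factory sketch in TypeScript (names are illustrative): defaults keep setup terse, while overrides make each test’s intent explicit.

```ts
interface User {
  name: string;
  role: "admin" | "member";
  active: boolean;
}

// Sensible defaults; each test overrides only what it cares about.
function createUser(overrides: Partial<User> = {}): User {
  return { name: "Test User", role: "member", active: true, ...overrides };
}

it("only states the data the test actually cares about", () => {
  const admin = createUser({ role: "admin" });
  expect(admin.role).toBe("admin");
});
```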
16. How do you test interactions between microservices?
Testing the interaction between microservices is complex. A full end-to-end test can be slow and brittle. A better approach is **Consumer-Driven Contract Testing** with a tool like Pact. The consumer service defines a “contract” specifying the requests it makes and the responses it expects. This contract is then run against the provider service in its CI pipeline to ensure the provider still honors the contract. This verifies the integration without actually having to deploy both services together.
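A hedged consumer-side sketch with pact-js (assuming `@pact-foundation/pact` v10+ and Node 18’s global `fetch`; service names and paths are illustrative):

```ts
import { PactV3 } from "@pact-foundation/pact";

const provider = new PactV3({
  consumer: "OrderUI",
  provider: "UserService",
});

it("fetches a user according to the contract", () => {
  // The consumer records the interaction it expects from the provider.
  provider
    .given("user 1 exists")
    .uponReceiving("a request for user 1")
    .withRequest({ method: "GET", path: "/users/1" })
    .willRespondWith({
      status: 200,
      headers: { "Content-Type": "application/json" },
      body: { id: 1, name: "Ada" },
    });

  // Pact spins up a mock provider; the resulting contract file is later
  // verified against the real provider in its CI pipeline.
  return provider.executeTest(async (mockServer) => {
    const res = await fetch(`${mockServer.url}/users/1`);
    expect(res.status).toBe(200);
  });
});
```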
Learn more about Consumer-Driven Contract Testing with Pact.

Advanced & Modern Concepts
17. What is mutation testing?
Mutation testing is a technique used to evaluate the quality of your existing unit tests. A mutation testing tool will take your production code and intentionally introduce small “mutations” or bugs (e.g., changing a `>` to a `<` or a `+` to a `-`). It then runs your entire test suite.
If your tests fail, the mutant is “killed,” which is good—it means your tests were able to catch the bug. If your tests still pass, the mutant “survives,” which indicates a weakness in your test suite. It’s a powerful way to find gaps in your tests that code coverage alone cannot detect.
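To make this concrete, here is a sketch (in TypeScript, names illustrative) of a typical mutant and the boundary test that kills it:

```ts
// Original production code:
function isAdult(age: number): boolean {
  return age >= 18;
}

// A mutation tool might flip `>=` to `>`. This boundary test kills that
// mutant; a test using only, say, age 30 would let it survive and reveal
// a gap in the suite that coverage alone would never show.
it("treats exactly 18 as adult", () => {
  expect(isAdult(18)).toBe(true);
});
```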
Explore Mutation Testing with Stryker.

18. Is 100% code coverage a good goal? Explain your reasoning.
No, 100% code coverage is generally not a good goal. While high coverage is desirable, striving for 100% often leads to diminishing returns and can be counterproductive.
- It encourages writing low-value tests just to cover trivial code like simple getters and setters.
- It provides a false sense of security. 100% coverage doesn’t mean your tests have good assertions or that they test for every important behavior.
- The effort required to get from 90% to 100% is often very high and that time could be better spent writing more meaningful integration tests or performing exploratory testing.
A healthy coverage percentage (e.g., 80-90%) combined with code reviews and other testing methods is a more pragmatic goal.
19. How do you test asynchronous code?
Testing asynchronous code requires mechanisms to wait for the operation to complete before making assertions. Most modern testing frameworks have built-in support for this.
- In JavaScript, you can use `async/await` directly in your tests with frameworks like Jest.
- In Java, you would use libraries like Awaitility to poll for a condition to be met within a certain timeout.
- For specific technologies like Kotlin Coroutines, you would use a `runTest` block with a test dispatcher (e.g., `StandardTestDispatcher`, which replaces the deprecated `TestCoroutineDispatcher`) to control the execution flow and virtual time.
The key is to avoid using manual `Thread.sleep()` calls, which are unreliable and slow down your tests.
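A minimal Jest sketch of the `async/await` approach (the `fetchGreeting` function is illustrative):

```ts
async function fetchGreeting(): Promise<string> {
  return new Promise((resolve) => setTimeout(() => resolve("hello"), 10));
}

it("awaits the result instead of sleeping", async () => {
  // Jest waits for the returned promise; the assertion runs only
  // after the asynchronous operation has actually completed.
  await expect(fetchGreeting()).resolves.toBe("hello");
});
```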
20. What is a “flaky test” and what are some common causes?
A flaky test is a test that passes and fails intermittently without any changes to the code. Flaky tests are highly disruptive to a CI/CD pipeline because they erode trust in the test suite.
Common causes include:
- Asynchronous Race Conditions: The test makes an assertion before an asynchronous operation has had time to complete.
- Test Order Dependency: The test relies on another test having run first to set up some state.
- Infrastructure Instability: The test relies on an external service that is unreliable.
- Time-based Logic: The test makes assumptions about the current date or time, which can fail at boundaries (e.g., midnight).
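One common remedy for the time-based cause above, sketched in TypeScript (the `Clock` type is a hypothetical illustration): inject the clock rather than reading the system time inside the logic.

```ts
type Clock = () => Date;

function isWeekend(now: Clock): boolean {
  const day = now().getUTCDay(); // 0 = Sunday, 6 = Saturday
  return day === 0 || day === 6;
}

it("is deterministic because the test supplies the clock", () => {
  // A fixed Saturday: the test never depends on when it actually runs.
  const saturday: Clock = () => new Date("2024-01-06T12:00:00Z");
  expect(isWeekend(saturday)).toBe(true);
});
```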
21. What is snapshot testing?
Snapshot testing is a technique primarily used for testing UI components, but it can also be used for API responses. The first time the test is run, it takes a “snapshot” of the rendered output (e.g., a component’s HTML or an API’s JSON response) and saves it to a file.
On subsequent runs, the test generates a new output and compares it to the saved snapshot. If they do not match, the test fails. This is a quick way to ensure that changes to your UI or API response are intentional. The developer can then inspect the difference and either fix the code or explicitly update the snapshot if the change was intended.
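A minimal Jest sketch (the `renderBadge` function is illustrative):

```ts
function renderBadge(user: { name: string; role: string }): string {
  return `<span class="badge badge-${user.role}">${user.name}</span>`;
}

it("matches the stored snapshot of the rendered output", () => {
  // First run writes the snapshot file; later runs diff against it,
  // failing if the rendered output changes unexpectedly.
  expect(renderBadge({ name: "Ada", role: "admin" })).toMatchSnapshot();
});
```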


