test / it, and describe should follow the same pattern as @playwright/test for test context extensions #1660

@ScottAwesome

Description

Clear and concise description of the problem

As a developer, I'd love it if the API for test context (fixtures) mirrored that of @playwright/test.

This would be better for the following reasons:

  • It makes context modifications easier to share.
  • It doesn't rely on setting up a beforeEach before a test to modify its context. You could easily share customized test / describe and/or it blocks, whereas using beforeEach is clunky and not easily sharable.
  • It's (perhaps subjectively) more ergonomic and discoverable. It certainly works better for typing with TypeScript, which is a huge win for large teams (especially enterprises).

Suggested solution

Allow test context to be set at the test function level, via test / it and/or describe, e.g.:

// my-test.ts
import { describe as base } from 'vitest';
import type { Todo } from './todo';
import { faker } from '@faker-js/faker';

// Declare the types of your fixtures.
interface TodoFixtures {
  todos: Todo[];
}

// Extend the base describe by providing "todos".
// This new "describe" can be used in multiple test files, and each of them will get the fixtures.
export const describe = base.extend<TodoFixtures>({
  todos: async (use) => {
    const todos: Todo[] = [];
    for (let i = 0; i < 25; i++) {
      todos.push({
        due: faker.date.between('2020-01-01T00:00:00.000Z', '2030-01-01T00:00:00.000Z'),
        assigned: faker.name.findName(),
        text: faker.lorem.lines(),
      });
    }
    await use(todos);
  },
});

Alternative

No response

Additional context

This is largely motivated by my team of 24+ engineers wanting standard patterns for faking data. One of our requirements for sharing API data is having fake datasets that can be reused for testing and validated against a schema, so we can verify that our validators work correctly. The same fake-data factory datasets are used for our API test responses and for data transformers, among other things. We have scaled the @playwright/test approach well, and by comparison our team finds this API rather clunky; having to declare a module augmentation adds overhead and is error-prone.
