I still remember the first time a team handed me a test suite with 1,800 tests and a single command: pytest. The suite passed, but the signal was awful. A flaky database test would fail and drown out the fast unit tests I actually needed. We didn’t need fewer tests—we needed better grouping. Grouping lets you run the right subset at the right time: quick unit tests on every save, integration checks on PRs, and slow end-to-end runs on nightly builds. That’s the difference between a test suite that helps you ship and one that trains you to ignore failures.
I’ll show you how I group tests in pytest using three practical patterns: organizing with test classes, using markers, and splitting tests across files and directories. I’ll also cover when I use each approach, common mistakes I see, and how I connect grouping to modern 2026 workflows like AI-assisted test selection and parallel execution. You’ll get runnable examples and actionable commands so you can refactor your test suite without starting from scratch.
The baseline: a tiny module and a simple test file
When I teach grouping, I start with a tiny codebase so you can see the effect immediately. Here’s a module with two logical domains—algebra and geometry—and a single test file that exercises everything.
# mathfuncs.py
"""Toy module with two logical domains: algebra and geometry."""

class Algebra:
    @staticmethod
    def square(x):
        return x ** 2

    @staticmethod
    def cube(x):
        return x ** 3

class Geometry:
    @staticmethod
    def is_triangle(a, b, c):
        return a + b + c == 180

    @staticmethod
    def is_quadrilateral(w, x, y, z):
        return w + x + y + z == 360
# test_mathfuncs.py
"""Tests for mathfuncs, all in one flat file."""
import mathfuncs

def test_square():
    assert mathfuncs.Algebra.square(40) == 1600
    assert mathfuncs.Algebra.square(5) == 25

def test_cube():
    assert mathfuncs.Algebra.cube(40) == 64000
    assert mathfuncs.Algebra.cube(5) == 125

def test_is_triangle():
    assert mathfuncs.Geometry.is_triangle(120, 40, 20) is True
    assert mathfuncs.Geometry.is_triangle(45, 67, 99) is False

def test_is_quadrilateral():
    assert mathfuncs.Geometry.is_quadrilateral(350, 5, 5, 0) is True
    assert mathfuncs.Geometry.is_quadrilateral(11, 22, 33, 44) is False
Running pytest -v will collect all four tests. That’s fine at this size, but once you have hundreds of tests, you need a way to target a subset without changing code. That’s where grouping enters.
Grouping with test classes: the simplest structural boundary
The fastest way to group related tests is to place them in test classes. Pytest doesn’t require classes, but it treats them as a natural container you can target from the command line. I use this for logical grouping within a single file when the tests share a domain or fixture.
# test_mathfuncs.py
import mathfuncs

class TestAlgebra:
    def test_square(self):
        assert mathfuncs.Algebra.square(40) == 1600
        assert mathfuncs.Algebra.square(5) == 25

    def test_cube(self):
        assert mathfuncs.Algebra.cube(40) == 64000
        assert mathfuncs.Algebra.cube(5) == 125

class TestGeometry:
    def test_is_triangle(self):
        assert mathfuncs.Geometry.is_triangle(120, 40, 20) is True
        assert mathfuncs.Geometry.is_triangle(45, 67, 99) is False

    def test_is_quadrilateral(self):
        assert mathfuncs.Geometry.is_quadrilateral(350, 5, 5, 0) is True
        assert mathfuncs.Geometry.is_quadrilateral(11, 22, 33, 44) is False
Run a class directly:
pytest test_mathfuncs.py::TestAlgebra -v
pytest test_mathfuncs.py::TestGeometry -v
I use classes when the grouping is about shared setup. For example, if all geometry tests need a fixture for angle validation, a class-level fixture keeps the file tidy. It also gives me a stable target for CI, like pytest tests/math/test_mathfuncs.py::TestGeometry.
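Here’s a minimal sketch of that pattern; the known_triangles fixture is my own illustration, not part of the module above:

# test_mathfuncs.py (sketch: a class-scoped fixture shared by one group)
import pytest

import mathfuncs

class TestGeometry:
    @pytest.fixture(scope="class")
    def known_triangles(self):
        # Built once per class, then reused by every test inside it.
        return [(120, 40, 20), (60, 60, 60)]

    def test_known_triangles_pass(self, known_triangles):
        for angles in known_triangles:
            assert mathfuncs.Geometry.is_triangle(*angles) is True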
Common mistakes with test classes
- Naming the class without a Test prefix: pytest ignores classes that don’t start with Test, so TestAlgebra is collected but AlgebraTests is not.
- Defining __init__: pytest won’t collect tests in classes that have an __init__ method. If you need setup, use fixtures instead.
- Shared state in class attributes: it makes tests order-dependent. Prefer fixtures or local variables inside each test.
When I use this pattern
- You have a single file with multiple domains.
- Tests share fixtures or setup within a domain.
- You want a simple CLI target without touching markers or config.
When I avoid it
- The grouping is orthogonal to code structure. For example, “fast” vs “slow” doesn’t map to code modules. That’s a marker problem, not a class problem.
Grouping with markers: flexible, cross-cutting slices
Markers are my go-to when grouping doesn’t align with file or class structure. They let you select tests by meaning: algebra, geometry, integration, slow, api, db, contract, and so on. This is essential for CI pipelines where you need multiple slices of the same test suite.
Here’s the same test file with markers:
# test_mathfuncs.py
import mathfuncs
import pytest

@pytest.mark.algebra
def test_square():
    assert mathfuncs.Algebra.square(40) == 1600
    assert mathfuncs.Algebra.square(5) == 25

@pytest.mark.algebra
def test_cube():
    assert mathfuncs.Algebra.cube(40) == 64000
    assert mathfuncs.Algebra.cube(5) == 125

@pytest.mark.geometry
def test_is_triangle():
    assert mathfuncs.Geometry.is_triangle(120, 40, 20) is True
    assert mathfuncs.Geometry.is_triangle(45, 67, 99) is False

@pytest.mark.geometry
def test_is_quadrilateral():
    assert mathfuncs.Geometry.is_quadrilateral(350, 5, 5, 0) is True
    assert mathfuncs.Geometry.is_quadrilateral(11, 22, 33, 44) is False
Run a marker group:
pytest -m algebra -v
pytest -m geometry -v
You can also combine markers with boolean logic:
pytest -m "algebra and not slow"pytest -m "geometry or algebra"
Register markers to avoid warnings
Pytest warns about unknown markers unless you register them. I recommend adding a pytest.ini in your repo root:
# pytest.ini
[pytest]
markers =
    algebra: algebra-related tests
    geometry: geometry-related tests
    slow: tests that take noticeable time
    integration: tests that touch external systems
This also serves as documentation. Teams in 2026 often use AI agents to infer test intent from markers; having consistent marker definitions makes that automation reliable.
Common mistakes with markers
- Forgetting the pytest import: without it, the @pytest.mark decorator won’t resolve.
- Too many markers: every new marker is cognitive load. I try to keep the primary set under 10.
- Markers that encode structure: if algebra maps directly to a folder, you probably want file grouping instead.
When I use this pattern
- You need cross-cutting slices (fast vs slow, unit vs integration).
- Tests in different files should run together in one command.
- You want CI stages to select subsets without touching paths.
When I avoid it
- Grouping maps cleanly to files and directories; markers would just duplicate structure.
Grouping by files and directories: the most scalable structure
When a test suite grows, grouping by filesystem is the most natural approach. The path becomes part of the selection. This scales better than long class names or massive marker lists.
Here’s a typical layout I recommend:
project/
    src/
        mathfuncs.py
    tests/
        algebra/
            test_algebra.py
        geometry/
            test_geometry.py
        integration/
            test_math_service.py
Example tests split by domain:
# tests/algebra/test_algebra.py
import mathfuncs

def test_square():
    assert mathfuncs.Algebra.square(40) == 1600
    assert mathfuncs.Algebra.square(5) == 25

def test_cube():
    assert mathfuncs.Algebra.cube(40) == 64000
    assert mathfuncs.Algebra.cube(5) == 125

# tests/geometry/test_geometry.py
import mathfuncs

def test_is_triangle():
    assert mathfuncs.Geometry.is_triangle(120, 40, 20) is True
    assert mathfuncs.Geometry.is_triangle(45, 67, 99) is False

def test_is_quadrilateral():
    assert mathfuncs.Geometry.is_quadrilateral(350, 5, 5, 0) is True
    assert mathfuncs.Geometry.is_quadrilateral(11, 22, 33, 44) is False
Run only geometry tests:
pytest tests/geometry -v
Run only algebra tests:
pytest tests/algebra -v
Run a single file:
pytest tests/geometry/test_geometry.py -v
Why this works so well
- Discoverability: new teammates find tests by browsing folders.
- Parallelism: large suites split easily across workers.
- CI clarity: path-based jobs are simple and stable.
Common mistakes with file grouping
- Breaking discoverability: naming files without the test_ prefix. Pytest won’t collect them under its default configuration.
- Mixing domains in a single file: you defeat the purpose of grouping by path.
- Over-nesting: too many directory levels make commands awkward. Keep it shallow.
When I use this pattern
- You have a medium-to-large test suite.
- Your codebase is modular (services, domains, features).
- You want CI to run slices in parallel.
When I avoid it
- Tiny projects where file overhead outweighs the benefit.
Comparing grouping methods: traditional vs modern workflow
Grouping isn’t just a style choice; it drives how fast you can iterate. I use a mix of all three methods depending on stage and scale.
| Traditional approach | My recommendation |
| --- | --- |
| One test file, no grouping | Use classes if you see two domains |
| Files per module | Files + light markers |
| One giant tests/ | Path-based splits with markers for cross-cutting runs |
| Manual selection | Keep grouping clear so AI inference is reliable |

The key is to make grouping obvious to both humans and automation. If your naming is consistent, modern tooling can do smart test selection without brittle heuristics.
Practical examples: how I group in real projects
Here are patterns I apply in production codebases:
1) Unit vs integration vs system
I use folders for major tiers, then markers for cross-cutting aspects like slow or flaky:
tests/
    unit/
    integration/
    system/
Then add markers:
@pytest.mark.slow
@pytest.mark.integration
def test_postgres_roundtrip():
    ...
Run unit tests for local development:
pytest tests/unit -q
Run integration tests in CI:
pytest tests/integration -m "not slow" -v
2) Feature slices for product teams
When a team owns a feature, I group by feature name:
tests/
    checkout/
    catalog/
    recommendations/
This makes onboarding easier. Engineers can run pytest tests/checkout and trust that they’re covering their area.
3) Marker-driven pipelines
A CI pipeline might run:
pytest -m "unit"on every pushpytest -m "integration and not slow"on PRspytest -m "slow"nightly
This avoids the temptation to skip tests entirely.
Performance and feedback loops
Grouping affects how quickly you see failures. A fast feedback loop is worth more than a huge suite that runs once per day. In my experience, developers stay engaged if a local run finishes in 2–15 seconds. Integration suites typically take 30–180 seconds, and full end-to-end suites can take several minutes. Grouping lets you choose the right loop for the task.
If your suite is slow, you can reduce friction by:
- Running unit tests on save and integration tests on commit.
- Adding a -m "not slow" default for local runs, as in the sketch after this list.
- Splitting long-running tests into a separate directory.
These decisions are often the difference between “tests are helpful” and “tests are ignored.”
Common mistakes I see in grouped test suites
Here’s the short list I keep in mind during reviews:
- Mixing unit and integration tests in one file: you can’t select them cleanly. Separate by directory or markers.
- Using markers without registering them: it produces warnings and confuses tooling.
- Overloading classes: a class with 40 tests often means you’re hiding subdomains. Break it up.
- Naming groups after implementation details: choose names that reflect behavior or domain, not internal modules that might change.
- Skipping documentation: define grouping rules in pytest.ini or a simple TESTING.md so new teammates don’t guess.
If you fix just one of these, your test suite already becomes more reliable.
When not to group
Grouping is useful, but it can be overdone. I hold back when:
- The project is tiny and a plain pytest run is already fast.
- There is no clear domain separation; grouping would be arbitrary.
- You’ll spend more time maintaining the grouping than the tests themselves.
If you’re in that situation, I suggest waiting until the suite grows or until there’s a real need for a subset run.
Real-world edge cases
Grouping gets tricky in real codebases. Here are a few issues I’ve hit and how I handle them:
Overlapping categories
A test might be both integration and slow. That’s not a problem—pytest supports multiple markers. I use this to express a matrix of concerns, like “integration but fast.”
Parameterized tests
Pytest parameterization can create many test cases. Grouping still works, but you should be aware that -m or path filters will collect all parameterized cases. If one case is slow, it makes the whole group slow. I sometimes split parameter sets into separate tests if the runtime differs by a lot.
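If only one case is heavy, pytest.param lets you mark individual cases instead of splitting the test. Here’s a sketch reusing the algebra example; the large input is my own illustration:

import pytest

import mathfuncs

@pytest.mark.algebra
@pytest.mark.parametrize("x, expected", [
    (2, 4),
    (3, 9),
    # Only this case carries the slow marker; -m "not slow" skips it
    # while the fast cases above still run.
    pytest.param(10**6, 10**12, marks=pytest.mark.slow),
])
def test_square_param(x, expected):
    assert mathfuncs.Algebra.square(x) == expected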
Shared fixtures across groups
If a fixture is used by multiple groups, keep it in conftest.py at the correct directory level. A common mistake is placing it in the wrong folder so tests outside that folder can’t see it.
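A sketch of that placement; the db_connection fixture and connect_test_database helper are placeholders for whatever your integration layer actually uses:

# tests/conftest.py — visible to every test under tests/
import pytest

@pytest.fixture
def numeric_cases():
    return [(0, 0), (1, 1), (2, 4)]

# tests/integration/conftest.py — visible only under tests/integration/
import pytest

@pytest.fixture(scope="session")
def db_connection():
    conn = connect_test_database()  # hypothetical helper for your stack
    yield conn
    conn.close()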
Incomplete grouping during migrations
When moving from a flat suite to grouped tests, it’s easy to forget old files. I recommend running pytest --collect-only and verifying that every test lives in the intended group. This is a simple sanity check that catches stragglers.
Putting it all together: a modern grouping workflow
Here’s the approach I use on new projects in 2026:
1) Start with file-based grouping as the default. Keep tests close to features.
2) Add test classes when a file holds multiple related areas or needs shared fixtures.
3) Add markers only for cross-cutting concerns like slow, integration, or api.
4) Document markers in pytest.ini and add short guidelines in TESTING.md.
5) Use CI to enforce the grouping—run fast groups on every push, slow groups on a schedule.
This balances clarity with flexibility. It also works well with AI-assisted tooling that selects tests based on file changes and markers.
Deeper example: grouping with fixtures, parametrization, and markers
Let me expand the tiny math example to show how grouping really plays out in day-to-day work. I’ll add input validation, a fixture for shared data, and parameterized tests that are tagged by domain. This gives you a realistic structure without adding noise.
# mathfuncs.py
class Algebra:
    @staticmethod
    def square(x):
        if not isinstance(x, (int, float)):
            raise TypeError("x must be numeric")
        return x ** 2

    @staticmethod
    def cube(x):
        if not isinstance(x, (int, float)):
            raise TypeError("x must be numeric")
        return x ** 3

class Geometry:
    @staticmethod
    def is_triangle(a, b, c):
        for v in (a, b, c):
            if not isinstance(v, (int, float)):
                raise TypeError("angle must be numeric")
        return a + b + c == 180

    @staticmethod
    def is_quadrilateral(w, x, y, z):
        for v in (w, x, y, z):
            if not isinstance(v, (int, float)):
                raise TypeError("angle must be numeric")
        return w + x + y + z == 360
# tests/conftest.py
import pytest

@pytest.fixture
def numeric_cases():
    return [
        (0, 0),
        (1, 1),
        (2, 4),
        (-3, 9),
    ]
# tests/algebra/test_algebra.py
import pytest

import mathfuncs

@pytest.mark.algebra
@pytest.mark.parametrize("x, expected", [(2, 4), (3, 9), (10, 100)])
def test_square_param(x, expected):
    assert mathfuncs.Algebra.square(x) == expected

@pytest.mark.algebra
@pytest.mark.parametrize("x, expected", [(2, 8), (3, 27), (10, 1000)])
def test_cube_param(x, expected):
    assert mathfuncs.Algebra.cube(x) == expected

@pytest.mark.algebra
def test_square_rejects_non_numeric():
    with pytest.raises(TypeError):
        mathfuncs.Algebra.square("a")
# tests/geometry/test_geometry.py
import pytest

import mathfuncs

@pytest.mark.geometry
@pytest.mark.parametrize("angles, expected", [
    ((120, 40, 20), True),
    ((45, 67, 99), False),
])
def test_is_triangle_param(angles, expected):
    assert mathfuncs.Geometry.is_triangle(*angles) is expected

@pytest.mark.geometry
def test_is_quadrilateral_basic():
    assert mathfuncs.Geometry.is_quadrilateral(350, 5, 5, 0) is True

@pytest.mark.geometry
def test_is_triangle_rejects_non_numeric():
    with pytest.raises(TypeError):
        mathfuncs.Geometry.is_triangle(60, "x", 60)
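The numeric_cases fixture from tests/conftest.py is visible to both groups. Here’s a minimal sketch of an extra algebra test that consumes it instead of carrying its own parametrize list:

# tests/algebra/test_algebra.py (additional test)
import pytest

import mathfuncs

@pytest.mark.algebra
def test_square_known_cases(numeric_cases):
    # The fixture comes from tests/conftest.py, shared across groups.
    for x, expected in numeric_cases:
        assert mathfuncs.Algebra.square(x) == expected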
What this buys you is flexibility. You can run by path (pytest tests/geometry -v), by marker (pytest -m geometry -v), or by combination (pytest tests -m "geometry and not slow"). You also get stronger documentation since each file and marker gives a clear purpose.
Migrating from a flat suite to grouped tests without pain
If your project already has a flat tests/ directory with hundreds of files, regrouping can feel risky. I break it into a simple, low-risk migration that keeps the suite green at every step.
Step 1: Create target folders without moving files
First I create the target structure, even if it is empty:
tests/
    unit/
    integration/
    system/
This sets expectations for the team and avoids surprise naming arguments later.
Step 2: Move small clusters and keep diff tight
Instead of moving everything at once, I move one cluster (e.g., user or checkout) and run pytest. The goal is a series of small, reviewable commits. The suite should still run at the root, because pytest discovers recursively.
Step 3: Add markers only when you need cross-cutting slices
I avoid adding markers during the initial move. I only add them when I need a grouping that doesn’t map to directories. That helps keep the marker list short and meaningful.
Step 4: Update docs and CI incrementally
I update CI after the first few clusters to avoid a giant switch. For example, add a tests/unit job and keep pytest (full suite) on nightly for a week. When the new job is stable, I retire the old one.
Step 5: Use --collect-only as a migration check
When I think I’m done, I run:
pytest --collect-only -q
It shows exactly what pytest is discovering, and it helps catch files that were accidentally renamed or moved to a non-discoverable path.
Marker strategy that stays sane over time
Markers can become a mess if you let them sprawl. I keep a marker strategy that acts like an API: small, consistent, and stable.
The core marker set I keep under 10
- unit: fast, isolated, no external IO
- integration: uses external systems (db, cache, network)
- system: end-to-end at the system boundary
- slow: anything that consistently exceeds your local threshold
- flaky: only when you can’t fix it yet (with a ticket link)
- api: tests for external API boundaries
- db: tests that hit a real database
This list is not mandatory, but it keeps names clear and avoids redundant tags like fast, quick, tiny, which all mean the same thing in practice.
A naming rule that prevents confusion
I avoid markers that mirror module names. If I already have tests/geometry, I don’t add @pytest.mark.geometry unless I need a cross-file group. This keeps marker usage meaningful, not decorative.
Avoid “catch-all” markers
Markers like critical or important sound useful, but they usually become subjective and inconsistent. If you need a “smoke test” subset, define it by behavior (system tests for the login flow) rather than by labels that everyone interprets differently.
Using custom test selection flags and -k patterns
Grouping isn’t only about directories and markers. Sometimes you need a one-off slice based on test names. That’s where -k is handy.
Example: name-based selection
If your tests are named well, -k can be a quick filter:
pytest -k triangle
pytest -k "triangle and not slow"
This is not a replacement for structured grouping, but it helps when you’re exploring a failing area or running only a small subset while debugging.
Naming conventions that make -k useful
I prefix test functions with consistent action verbs or domains:
- test_triangle_invalid_angles is more useful than test_invalid_1.
- test_api_returns_401 is more useful than test_auth_case.
If your names are descriptive, -k becomes a mini search tool for tests.
Advanced directory layout patterns I’ve seen work
File grouping is where most suites live and grow. I’ve seen several layouts that scale well. Here’s how I pick between them.
Pattern 1: Layered by test tier
tests/
    unit/
    integration/
    system/
Use this when your tier boundaries are stable and well-defined. It’s good for strong quality gates and predictable CI jobs.
Pattern 2: Feature-first, tier-second
tests/
    checkout/
        unit/
        integration/
    catalog/
        unit/
        integration/
Use this when teams are organized by features or services. The module owner can run all tests for their feature and still filter by tier.
Pattern 3: Service-first in a monorepo
services/
    billing/
        tests/
    auth/
        tests/
Use this when each service can run independently. It keeps the test suite near the service boundary and makes it easier to run “local service only” tests.
Choosing the right pattern
I ask two questions:
1) Who owns the code and how do they think about it?
2) What grouping do I expect to run in CI and locally?
If the answer is “by team or feature,” I lean feature-first. If the answer is “by tier,” I go tier-first. Both work; consistency matters more than the specific choice.
Handling slow or flaky tests without breaking trust
Grouping isn’t just about selection; it’s also about trust. Slow tests and flaky tests destroy trust. Grouping gives you a way to isolate them so they don’t dominate every run.
The slow test playbook I use
1) Tag with @pytest.mark.slow.
2) Remove from default runs (-m "not slow").
3) Give them a schedule (nightly or on main branch only).
4) Track trends: if slow tests grow, treat that as tech debt.
The flaky test playbook I use
1) Tag with @pytest.mark.flaky and add a link to a tracking issue.
2) Run them separately so they don’t mask signal.
3) Do not make flaky part of your main test run.
4) Remove the marker as soon as the root cause is fixed.
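As a sketch, here’s how I keep the ticket link next to the tag. Note that flaky here is a plain custom marker registered in pytest.ini, not a plugin, and the test name and link are placeholders:

# tests/integration/test_cache.py (illustrative name)
import pytest

# Custom "flaky" marker, registered in pytest.ini. Keeping the tracking
# link next to the tag makes the debt visible in code review.
@pytest.mark.flaky  # tracked in: <link to your issue tracker>
def test_cache_eviction_under_load():
    ...  # placeholder body for this sketch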
Why isolating slow tests increases quality
It sounds counterintuitive, but when fast tests are reliable and quick, developers run them more often and catch errors earlier. That raises overall quality more than a slow suite that nobody runs.
Grouping and parallel execution
Parallel execution becomes truly effective when your test suite is grouped sensibly. If you split by directories or markers, you can run multiple groups at once with a tool like pytest-xdist.
Example parallel commands
pytest -n auto tests/unit
pytest -n 4 -m "integration and not slow"
The key to good parallel splits
- Avoid massive, unbalanced groups. If one directory has 500 tests and the rest have 50, that one worker becomes your bottleneck.
- Aim for equal-ish counts across groups, or split by runtime rather than count.
A simple rule of thumb
If a group consistently takes 2x–3x longer than the others, split it further by domain or by type. You don’t need to perfect it; you just want the slow tail to shrink.
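If you use pytest-xdist, one way to keep related tests on a single worker while the rest stay balanced is its xdist_group marker with --dist loadgroup. This is a sketch assuming a recent pytest-xdist is installed; the group and test names are my own:

import pytest

# Run with: pytest -n auto --dist loadgroup
# Every test sharing an xdist_group name lands on the same worker,
# so expensive shared setup isn't duplicated across workers.
@pytest.mark.xdist_group(name="geometry-db")
def test_geometry_persistence_roundtrip():
    ...  # placeholder body for this sketch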
Grouping and AI-assisted test selection in 2026
Most teams now use some form of AI-assisted test selection: tooling that looks at the changed files and recommends a subset of tests. Grouping makes those tools accurate.
How grouping helps AI-based selection
- Clear folder names make it easier to map code changes to tests.
- Markers provide explicit meaning (fast, integration, db) instead of implicit guesses.
- Consistent naming reduces false positives (running too many tests) and false negatives (missing critical tests).
How I prepare a suite for AI selection
- Keep group names stable over time.
- Avoid renaming directories casually.
- Document your grouping rules in TESTING.md.
- Use markers sparingly and define them in pytest.ini.
The goal is to make the suite interpretable both by humans and automation. If the structure is clear to a new engineer, it will likely be clear to an AI agent too.
Practical CI recipes for grouped tests
Here are three CI recipes I’ve used repeatedly. They’re simple, but they encourage good behavior.
Recipe 1: Fast local + full nightly
- Local developer: pytest tests/unit -q
- PR: pytest tests/unit -q plus pytest tests/integration -m "not slow"
- Nightly: pytest -m "slow"
Recipe 2: Feature-based ownership
- pytest tests/checkout for checkout changes
- pytest tests/catalog for catalog changes
- pytest tests/recommendations for recommendations changes
Recipe 3: Risk-based execution
pytest -m "unit"alwayspytest -m "integration and not slow"on PRspytest -m "slow or flaky"nightly or on a scheduled job
These recipes work because grouping is aligned to how people work, not just how the test suite is organized.
Alternative approaches to grouping (and why I still prefer pytest’s native patterns)
There are other ways to group tests that people sometimes reach for. Here’s how I evaluate them.
Using custom test runners or wrappers
Some teams write wrappers like ./run_tests.sh unit or python -m tests.run --group unit. These can work, but they often hide pytest’s native selection features and require more maintenance. I still prefer native pytest -m and path selection because they’re transparent and already understood by most developers.
Using naming conventions only
You can group by naming conventions alone, with prefixes like test_unit_ and test_integration_. It’s a valid approach, but it’s brittle. As the suite grows, naming conventions become harder to enforce than directory boundaries or markers.
Splitting into multiple repositories
Some teams split integration tests into a separate repo. That gives a clean boundary, but it introduces duplicate setup and version drift. I only do this when the test suite requires completely separate infrastructure or release timelines.
In most cases, pytest’s native grouping mechanisms are powerful enough and more consistent with the ecosystem.
Performance considerations and realistic expectations
It’s tempting to chase exact speed targets, but I focus on ranges and behavior. Here’s what I aim for:
- Local unit test run: roughly 2–15 seconds for common tasks.
- Integration run: roughly 30–180 seconds, depending on external services.
- Full end-to-end: several minutes, scheduled or on main branch.
These aren’t hard rules; they’re feedback loop targets. If a “unit” run regularly takes 60 seconds, it stops being a unit run in practice. That’s your signal to re-group or move expensive tests into a slower tier.
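pytest’s built-in duration report is the quickest way to see which tests push a group out of its range:

pytest tests/unit --durations=10

It prints the ten slowest test phases, which usually points straight at candidates for the slow marker or a higher tier.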
Grouping and test data management
Grouping changes how you handle test data. Fast tests need fast data, while integration tests can afford real or semi-real data.
Tips for fast groups
- Use in-memory data or fixtures that don’t hit the filesystem.
- Generate test data quickly with factories rather than loading large fixtures.
- Avoid network calls, even to local services.
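A minimal factory sketch using only pytest and the math example; make_angles and the test file name are my own illustrations:

# tests/unit/conftest.py
import pytest

@pytest.fixture
def make_angles():
    # Factory fixture: returns a builder so each test creates exactly
    # the data it needs, in memory, with no files or network involved.
    def _make(n, total=180):
        base = total // n
        angles = [base] * n
        angles[-1] += total - base * n  # absorb any rounding remainder
        return angles
    return _make

# tests/unit/test_geometry_fast.py
import mathfuncs

def test_generated_triangle(make_angles):
    assert mathfuncs.Geometry.is_triangle(*make_angles(3)) is True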
Tips for integration groups
- Seed databases once per test session using session-scoped fixtures.
- Keep data sets minimal and focused.
- Clean up after tests to avoid cross-test contamination.
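A sketch of session-scoped seeding; seed_database and drop_database stand in for whatever setup helpers your stack provides:

# tests/integration/conftest.py
import pytest

@pytest.fixture(scope="session")
def seeded_db():
    db = seed_database(minimal=True)  # hypothetical: seed once per session
    yield db                          # every integration test reuses it
    drop_database(db)                 # hypothetical: clean up afterwards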
Grouping makes these decisions clearer. When a test belongs to integration, it can use integration data practices. When a test belongs to unit, it should stay lightweight.
Documentation that makes grouping stick
Even a good grouping structure fails without documentation. I keep it short and practical. Here’s what I add to TESTING.md:
- A 10-line explanation of the grouping structure.
- The primary pytest commands for local work and CI.
- A list of markers and what they mean.
- A policy on slow and flaky tests.
This takes less than a page, but it avoids months of confusion.
Key takeaways and what I’d do next
Grouping tests is less about tooling and more about discipline. When you separate tests by purpose and speed, you protect your feedback loops and keep your suite trustworthy. The simplest path I recommend is: start with directory grouping, add classes for shared setup, and add markers only for cross-cutting concerns. That structure scales from a tiny module to a multi-team codebase.
If I were in your repo today, I’d do three things next:
1) Create a clear directory layout (by tier or feature).
2) Register a minimal marker set in pytest.ini.
3) Add a short TESTING.md so the rules are obvious to everyone.
Once those are in place, your suite is ready for selective runs, parallel execution, and AI-assisted test selection—without turning your test strategy into a maintenance burden.
That’s the real goal: tests you trust, grouped in a way that matches how you build and ship software.