I keep seeing the same bug pattern in code reviews: a base class defines a helpful `@classmethod`, a subclass overrides it, and suddenly the system stops behaving consistently. Sometimes the subclass forgets to call the base logic. Sometimes it calls the base method but hard-codes the parent class instead of using `cls`, quietly breaking polymorphism. And sometimes everything works… until multiple inheritance enters the picture and your “simple override” becomes a subtle MRO problem.

If you build libraries, internal frameworks, or just want a cleaner object model, extending class methods is one of those skills that pays back every week. Done well, it lets you keep shared logic centralized while still letting subclasses customize behavior safely. Done poorly, it turns inheritance into a guessing game.

Here’s what I’ll do in this post: explain what extending a class method really means in practice, show patterns I use in production (factories, registries, template methods), and call out the mistakes that cause the weird bugs. You’ll come away with a mental model for `cls`, `super()`, and cooperative inheritance, plus runnable examples you can adapt.

## Class Methods: The Mental Model I Use
A class method is a function bound to the class object, not an instance. The key detail is the first argument: `cls` (the class the method was called on), not `self` (an instance).

I like a simple analogy: an instance method is “how this particular car starts,” while a class method is “how the car brand defines starting behavior.” That’s not perfect, but it gets you thinking in the right direction: class methods express behavior at the type level.

A few implications that matter when you extend them:

- `cls` is dynamic. If you call `SportsCar.start()` on an inherited class method, `cls` is `SportsCar` inside that call.
- Class methods are great for alternate constructors, registries, and policies that depend on the subclass.
- Overriding a class method in a subclass is normal.
Extending it (calling the parent logic and adding more) is where `super()` becomes your best friend.

When you extend a class method, you’re usually trying to preserve shared behavior from a base class while layering specialization in the subclass.

### What “bound to the class” really means
Under the hood, `@classmethod` is a descriptor. When you access it as `MyClass.method`, Python returns a callable whose first argument will be the class object you accessed it through. That’s why `cls` is “who called me,” not “where was I originally defined.”

This one detail explains a lot of best practices:

- If a base class method constructs objects, it should call `cls(...)` so subclasses get instances of the subclass.
- If a base class method needs to find “the next implementation,” it should call `super()` rather than naming a parent explicitly, so the MRO can do its job.

### Class methods vs instance methods vs static methods (how I decide)
I keep a quick decision rule in my head:

- Use an instance method when behavior depends on instance state (`self`) and should be polymorphic per object.
- Use a class method when behavior depends on type-level configuration (`cls`) and should remain polymorphic per subclass.
- Use a static method when it’s a utility colocated for namespacing, but you do not want or need polymorphism.

The second bullet is the key: if subclassing is part of the design, `@classmethod` is often the cleanest way to keep call sites simple and return the right types.

## Extending with super(): The Core Technique
Extending a class method typically means:

1) Override the method in a subclass.
2) Call the parent implementation via `super()`.
3) Add subclass-specific behavior.

Here’s the smallest example that demonstrates “inherit base logic, then add more”:

```python
class Service:
    @classmethod
    def endpoint(cls) -> str:
        return "/api"

class BillingService(Service):
    @classmethod
    def endpoint(cls) -> str:
        return f"{super().endpoint()}/billing"

print(BillingService.endpoint())
```

Output:

```text
/api/billing
```

A couple of important details:

- `super().endpoint()` inside a `@classmethod` works exactly like you want: it resolves the next method in the MRO and calls it.
- You should almost never write `Service.endpoint()` here. Doing so bakes in the parent class and breaks extensibility.

### A quick note on super() in class methods
In modern Python, zero-argument `super()` inside a method is the default choice. It’s readable and keeps refactors safe.

If you’re inside a `@classmethod`, you may see older code doing `super(CurrentClass, cls).method()`. It works, but it’s noisier and easier to get wrong during refactors.

### “Extend” means you preserve the contract, not just the name
When I say “extend a class method,” I don’t mean “override it and do something else.” I mean “keep the base behavior’s guarantees” and then layer customization.

For example, if the base method promises:

- it validates inputs
- it normalizes formats
- it emits metrics
- it returns an instance of `cls`

…then the extension must preserve those guarantees unless the subclass very explicitly changes the contract (and documents it).

If you want a different contract, it’s often cleaner to give it a different method name rather than surprise callers.

## cls Is Your Contract: Avoid Hard-Coding Classes
Extending class methods is usually about letting the call site choose behavior by calling the subclass:

- `Base.parse(...)` returns a base type
- `JsonConfig.parse(...)` returns a JSON config type
- `YamlConfig.parse(...)`
returns a YAML config type

That only holds if the base class uses `cls` consistently.

Here’s a real-world style pattern: an alternate constructor that returns the correct subtype.

```python
from __future__ import annotations

import json
from dataclasses import dataclass
from typing import Self

@dataclass(frozen=True)
class Config:
    values: dict[str, object]

    @classmethod
    def from_text(cls, text: str) -> Self:
        data = cls.parse(text)
        # Important: use cls(...), not Config(...)
        return cls(values=data)

    @classmethod
    def parse(cls, text: str) -> dict[str, object]:
        # Base behavior: treat as JSON
        return json.loads(text)


@dataclass(frozen=True)
class StrictConfig(Config):
    @classmethod
    def parse(cls, text: str) -> dict[str, object]:
        data = super().parse(text)
        if "env" not in data:
            raise ValueError("Missing required key: env")
        return data


c = StrictConfig.from_text('{"env": "prod", "feature": true}')
print(type(c).__name__, c.values["env"])
```

Output:

```text
StrictConfig prod
```

Why I like this pattern:

- The base class defines the public API (`from_text`).
- Subclasses override a narrower hook (`parse`) and extend it via `super()`.
- The return type stays correct (`Self`), which helps type checkers and IDEs.

This is a “template method” style design, but implemented with class methods.

### Typing note: Self vs TypeVar (what I actually do)
If you’re on a Python version that supports `typing.Self` (3.11+), it’s usually the most readable way to express “returns an instance of whatever subclass called this.”

If you’re stuck on older versions, you can replicate it with a `TypeVar` bound to the base class:

```python
from __future__ import annotations

from typing import TypeVar

T = TypeVar("T", bound="Config")

class Config:
    @classmethod
    def from_text(cls: type[T], text: str) -> T:
        ...
```

The idea is the same: the class method is polymorphic, and the type signature should reflect that.

### A subtle but important rule: avoid “helpful” base-class constructors
A base class method that constructs instances should almost always return `cls(...)` and never `Base(...)`, but there’s a second trap: calling other constructors internally that are not polymorphic.

For example, if `from_text()` calls `Config(...)` somewhere deep in helper logic, you’ll accidentally “collapse” subclass calls back into base instances. When you refactor, search for hard-coded class construction and make it intentional.

## Extending Class Methods for Factories (Without Duplicating Logic)
Factories are where class methods shine. The big win: your base factory can do shared work (validation, normalization, metrics), while subclasses customize only the pieces they care about.

### Pattern: Base factory + subclass hooks
Let’s model a token system where all tokens share core parsing rules, but different token types add constraints.

```python
from __future__ import annotations

import re
from dataclasses import dataclass
from typing import Self

@dataclass(frozen=True)
class ApiToken:
    raw: str

    TOKEN_RE = re.compile(r"^[A-Z0-9]{8}\.[A-Z0-9]{8}$")

    @classmethod
    def parse(cls, text: str) -> Self:
        normalized = cls.normalize(text)
        cls.validate(normalized)
        return cls(raw=normalized)

    @classmethod
    def normalize(cls, text: str) -> str:
        return text.strip().upper()

    @classmethod
    def validate(cls, normalized: str) -> None:
        if not cls.TOKEN_RE.match(normalized):
            raise ValueError("Token must look like ABCD1234.WXYZ5678")


@dataclass(frozen=True)
class AdminToken(ApiToken):
    @classmethod
    def validate(cls, normalized: str) -> None:
        super().validate(normalized)
        if not normalized.startswith("ADMIN"):
            raise ValueError("Admin token must start with ADMIN")


t = AdminToken.parse("admin000.abcd1111")
print(type(t).__name__, t.raw)
```

What’s happening:

- `ApiToken.parse()` is inherited and stays the single place that defines the contract.
- `AdminToken` extends `validate()`
while preserving base validation.
- The result type matches the class you called (`AdminToken`).

### Traditional vs modern factory style
Here’s how I think about it in 2026, especially when your team uses type checkers.
| Traditional approach | Modern approach |
| --- | --- |
| `__init__` grows flags and branches | `@classmethod` factories returning `Self` |
| Override `__init__` and repeat work | Extend narrow hooks via `super()` |
| Easy to break call order | Cooperative `super()` calls follow the MRO |
| Harder for type checkers | `Self`, `typing.override`, strict checks |

If you’re on Python 3.12+ (very common now), add `typing.override` to catch accidental signature mismatches.

### A more complete factory example: parse + enrich + audit
In production, factories rarely just “parse and return.” They normalize, enrich, and audit. The trick is to keep the public method stable and push variability into hooks.

```python
from __future__ import annotations

import time
from dataclasses import dataclass
from typing import Self

@dataclass(frozen=True)
class Event:
    name: str
    ts_ms: int
    source: str

    @classmethod
    def from_payload(cls, payload: dict[str, object], *, source: str) -> Self:
        cleaned = cls.clean(payload)
        cls.validate(cleaned)
        enriched = cls.enrich(cleaned)
        cls.audit(enriched, source=source)
        return cls(
            name=str(enriched["name"]),
            ts_ms=int(enriched["ts_ms"]),
            source=source,
        )

    @classmethod
    def clean(cls, payload: dict[str, object]) -> dict[str, object]:
        # Normalize keys and set defaults
        out = dict(payload)
        if "ts_ms" not in out:
            out["ts_ms"] = int(time.time() * 1000)
        return out

    @classmethod
    def validate(cls, payload: dict[str, object]) -> None:
        if "name" not in payload:
            raise ValueError("event missing 'name'")

    @classmethod
    def enrich(cls, payload: dict[str, object]) -> dict[str, object]:
        return payload

    @classmethod
    def audit(cls, payload: dict[str, object], *, source: str) -> None:
        # Placeholder: in real code this might emit a metric
        pass


@dataclass(frozen=True)
class StrictEvent(Event):
    @classmethod
    def validate(cls, payload: dict[str, object]) -> None:
        super().validate(payload)
        if not isinstance(payload.get("name"), str):
            raise ValueError("event 'name' must be a string")

    @classmethod
    def enrich(cls, payload: dict[str, object]) -> dict[str, object]:
        out = super().enrich(payload)
        out = dict(out)
        out["name"] = out["name"].strip()
        return out
```

This is the pattern I come back to: one
stable entry point, several narrowly scoped hooks, and each hook can be extended cooperatively.

## Registries and Plugin Systems: Extend Without Breaking the Base Contract
A common reason to extend a class method is to add automatic registration or metadata. I see this a lot in plugin systems: the base class offers `register()` and `create()`; subclasses customize identifiers.

Here’s a clean registry pattern that stays extensible:

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import ClassVar, Self

@dataclass
class Handler:
    name: str

    registry: ClassVar[dict[str, type[Handler]]] = {}

    @classmethod
    def key(cls) -> str:
        # Default key: class name in lowercase
        return cls.__name__.lower()

    @classmethod
    def register(cls) -> None:
        key = cls.key()
        if key in cls.registry:
            raise ValueError(f"Duplicate handler key: {key}")
        cls.registry[key] = cls

    @classmethod
    def create(cls, key: str, name: str) -> Self:
        handler_cls = cls.registry[key]
        return handler_cls(name=name)  # type: ignore[return-value]


class EmailHandler(Handler):
    @classmethod
    def key(cls) -> str:
        return "email"


class SmsHandler(Handler):
    @classmethod
    def key(cls) -> str:
        return "sms"


EmailHandler.register()
SmsHandler.register()

h = Handler.create("email", name="billing")
print(type(h).__name__, h.name)
```

Output:

```text
EmailHandler billing
```

Where extending fits:

- You can extend `register()` to add logging, metrics, or validation.
- You can extend `key()` to enforce naming rules while still calling `super().key()`.

Example: enforce a prefix policy without duplicating base logic.

```python
class InternalHandler(Handler):
    @classmethod
    def key(cls) -> str:
        base = super().key()
        return f"internal:{base}"
```

This style keeps the base registry behavior consistent while allowing systematic differences across families of subclasses.

### Production-grade registration: `__init_subclass__` + explicit opt-in
One pain point with registries is forgetting to call `register()`. You can reduce that with `__init_subclass__`, but I like to keep it explicit and controllable (auto-registering everything can bite you in tests or cause import-time side effects).

Here’s an approach I’ve used: subclasses opt in by setting a class attribute, and the base class handles the rest.

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import ClassVar

@dataclass
class Plugin:
    name: str

    registry: ClassVar[dict[str, type[Plugin]]] = {}
    auto_register: ClassVar[bool] = False

    def __init_subclass__(cls, **kwargs: object) -> None:
        super().__init_subclass__(**kwargs)
        if getattr(cls, "auto_register", False):
            cls.register()

    @classmethod
    def key(cls) -> str:
        return cls.__name__.lower()

    @classmethod
    def register(cls) -> None:
        key = cls.key()
        if key in cls.registry:
            raise ValueError(f"Duplicate plugin key: {key}")
        cls.registry[key] = cls


class AutoEmailPlugin(Plugin):
    auto_register = True
```

This is still “extending class behavior,” but it moves some of the lifecycle wiring to subclass creation time rather than runtime. It’s a useful tool when your plugin system is large.

### Registry edge case: shared state across subclasses (and how to avoid it)
In the earlier examples, `registry` lives on the base class. That’s correct for a global registry, but it’s wrong if each subclass family should have its own registry.

If you want per-subclass registries, you can initialize storage per subclass in `__init_subclass__`:

```python
class FamilyRegistry:
    registry: dict[str, type["FamilyRegistry"]]

    def __init_subclass__(cls, **kwargs: object) -> None:
        super().__init_subclass__(**kwargs)
        # Each subclass gets its own independent registry
        cls.registry = {}
```

Now `FamilyA.registry` and `FamilyB.registry` won’t collide.
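To see that isolation concretely, here’s a runnable sketch. The `FamilyA`/`FamilyB` names are hypothetical, used only to show that each subclass gets its own dict:

```python
class FamilyRegistry:
    registry: dict[str, type["FamilyRegistry"]]

    def __init_subclass__(cls, **kwargs: object) -> None:
        super().__init_subclass__(**kwargs)
        # Fresh, independent dict for every subclass
        cls.registry = {}


class FamilyA(FamilyRegistry):
    pass


class FamilyB(FamilyRegistry):
    pass


FamilyA.registry["x"] = FamilyA
print(FamilyA.registry == {"x": FamilyA}, FamilyB.registry == {})  # True True
```

Because assignment happens in `__init_subclass__`, writes to `FamilyA.registry` never leak into `FamilyB`, which is exactly the cross-test pollution this section is warning about.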
I’ve seen a lot of “why are my tests flaky?” incidents caused by shared class registries leaking between test cases.

## Multiple Inheritance and Cooperative Class Methods (MRO Done Right)
If your codebase ever uses mixins, you must think about cooperative inheritance. The moment two parents define the same class method name, `super()` becomes the only sane way to make them compose.

Here’s the mental model: in cooperative multiple inheritance, each implementation should do its part and then call `super()` so the next class in the MRO can do its part.

Let’s build a realistic example: configuration loading that combines behavior from mixins.

```python
from __future__ import annotations

import os

class BaseConfig:
    @classmethod
    def load(cls) -> dict[str, str]:
        # Base layer: empty config
        return {}


class EnvConfigMixin(BaseConfig):
    @classmethod
    def load(cls) -> dict[str, str]:
        data = super().load()
        # Add a few env-based keys
        if "APP_ENV" in os.environ:
            data["env"] = os.environ["APP_ENV"]
        return data


class DefaultsMixin(BaseConfig):
    @classmethod
    def load(cls) -> dict[str, str]:
        data = super().load()
        data.setdefault("env", "dev")
        data.setdefault("region", "us-east")
        return data


class AppConfig(EnvConfigMixin, DefaultsMixin, BaseConfig):
    pass


print(AppConfig.load())
```

This works because:

- Each layer calls `super().load()`.
- The MRO defines the order: AppConfig -> EnvConfigMixin -> DefaultsMixin -> BaseConfig.
- Each layer adds or refines data.

### The mistake I see most often
Someone writes `BaseConfig.load()` instead of `super().load()`. That “works” in single inheritance, and then quietly breaks composition when a mixin is added later.

If there’s any chance your design will grow mixins, treat `super()` as non-negotiable.

### A more robust cooperative pattern: accept **kwargs in hooks
A second source of MRO pain is argument drift.
One mixin wants a path, another wants a strict flag, and suddenly you’re fighting signatures.

For cooperative inheritance, I often design mixin class methods to accept `**kwargs` and forward them, while the “leaf” or base class consumes what it needs.

```python
from __future__ import annotations

from typing import Any

class LoaderBase:
    @classmethod
    def load(cls, **kwargs: Any) -> dict[str, str]:
        return {}


class StrictMixin(LoaderBase):
    @classmethod
    def load(cls, **kwargs: Any) -> dict[str, str]:
        data = super().load(**kwargs)
        if kwargs.get("strict") and "env" not in data:
            raise ValueError("strict mode requires env")
        return data


class EnvMixin(LoaderBase):
    @classmethod
    def load(cls, **kwargs: Any) -> dict[str, str]:
        data = super().load(**kwargs)
        env = kwargs.get("env")
        if isinstance(env, str):
            data["env"] = env
        return data


class MyLoader(StrictMixin, EnvMixin, LoaderBase):
    pass

print(MyLoader.load(env="prod", strict=True))
```

I don’t do this everywhere (it can hide mistakes), but for mixin-heavy frameworks it’s often the least painful way to keep things composable.

### How I debug MRO issues quickly
When behavior looks “wrong” in multiple inheritance, I inspect the MRO and verify which implementation got called.

I’ll usually do at least one of these:

```python
print([c.__name__ for c in AppConfig.mro()])
```

and (for really stubborn issues) temporarily add tracing inside each implementation to confirm call order.
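Here’s a minimal tracing sketch of that technique (the `MixinA`/`MixinB` names are hypothetical): each implementation records itself before delegating, so the collected list is the actual call order.

```python
calls: list[str] = []

class Base:
    @classmethod
    def load(cls) -> None:
        calls.append("Base")

class MixinA(Base):
    @classmethod
    def load(cls) -> None:
        calls.append("MixinA")
        super().load()  # hand off to the next class in the MRO

class MixinB(Base):
    @classmethod
    def load(cls) -> None:
        calls.append("MixinB")
        super().load()

class App(MixinA, MixinB, Base):
    pass

App.load()
print(calls)  # ['MixinA', 'MixinB', 'Base'] -- matches App.mro()
```

Comparing the trace against `App.mro()` immediately shows whether some layer skipped `super()` and cut the chain.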
The point is to stop guessing: the MRO is deterministic, but it’s easy to misread in your head.

## Common Mistakes When Extending Class Methods (and How I Avoid Them)
These are the issues I see in production code, along with the fix I recommend.

### 1) Forgetting to preserve return values
If the parent returns a value and the child calls `super()` only for side effects, you can lose behavior.

Bad pattern:

```python
class Parent:
    @classmethod
    def build(cls) -> str:
        return "base"

class Child(Parent):
    @classmethod
    def build(cls) -> str:
        super().build()  # return value ignored
        return "child"
```

Better:

```python
class Child(Parent):
    @classmethod
    def build(cls) -> str:
        return f"{super().build()} + child"
```

One reason I like hook methods is that they reduce how often you need to “merge” returns. Instead, the base returns the final value, and the hooks contribute along the way.

### 2) Changing the signature in a subclass
Class methods participate in polymorphism. If you change parameters in a subclass, you make call sites fragile.

My rule: keep the same signature for the public class method, and customize behavior through narrower hook methods.

If you truly need additional subclass-only inputs, either:

- add them as optional keyword-only args in the base signature, or
- create a separate method name (so callers opt in explicitly).

### 3) Calling super() in the wrong place
Order matters.
If the base class validates, logs, or sets up invariants, call it first.

I usually pick one of these and stick to it:

- Base-first: `super()` then add constraints (common for validation)
- Base-last: do subclass prep, then `super()` (common for building a payload before a shared send)

When the order matters, I like to encode it by splitting methods into “pre” and “post” hooks rather than relying on everyone remembering where to put `super()`.

### 4) Confusing @classmethod with @staticmethod
If you need subclass awareness (`cls`), it’s a class method.

I choose `@staticmethod` only when:

- the function is logically attached to the class namespace, but
- it does not need `cls`, and
- overriding it in subclasses is not part of the design.

If you mark something `@staticmethod` and later decide it should be polymorphic, you’ll end up rewriting call sites or introducing awkward adapters.

### 5) Using class attributes as “mutable globals”
Class methods often interact with class attributes (registries, caches, counters). That’s fine, but be deliberate about where the data lives.

If each subclass should have its own registry, store it on the subclass, not on the base.
You can do that by initializing in `__init_subclass__`, or by having the class method ensure the structure exists on `cls`.

A lightweight pattern I use for per-subclass caches is “initialize on first use”:

```python
class CachedThing:
    cache: dict[str, object]

    @classmethod
    def get_cache(cls) -> dict[str, object]:
        # Ensure each subclass has its own cache dict
        if "cache" not in cls.__dict__:
            cls.cache = {}
        return cls.cache
```

That `cls.__dict__` check is intentional: it distinguishes “inherited attribute” from “defined on this subclass.”

### 6) Accidentally turning cooperative inheritance into a dead end
In mixin designs, one class forgetting to call `super()` can cut the chain and silently drop behavior from other mixins.

If a method name is intended to be cooperative, I treat “call super()” as part of the method’s contract. I’ll often encode that expectation in naming (`contribute`), documentation, and tests.

### 7) Overusing inheritance when composition is clearer
I’ve seen teams reach for “extend the class method” when what they really needed was a strategy object or a plain function pipeline. Inheritance is great when there’s a stable conceptual hierarchy; it’s messy when you’re just trying to inject a callback.

If your “subclasses” are really just configuration variants, consider composition. It can still be clean and type-safe, and it avoids MRO surprises entirely.

## Performance and Maintainability Considerations (What Matters in Practice)
Extending class methods is rarely your bottleneck, but it can become a maintainability bottleneck if you make the call chain too clever.
Here’s how I think about it.

### Performance: what to worry about (and what not to)
- The overhead of a `@classmethod` call vs a plain function call is usually negligible unless you’re in very tight loops.
- The overhead of `super()` resolution is also usually negligible, but it can add up if you have many layers and call it extremely frequently.
- The bigger real-world costs are often in what class methods do: parsing, I/O, regex, JSON, validation, imports.

If performance is a concern, I prefer these optimizations before I touch inheritance design:

- Cache expensive computed class-level values (compiled regexes, derived schemas).
- Avoid import-time heavy work in class bodies if it slows startup.
- Keep class methods thin wrappers around fast helpers (which are easier to test and profile).

### Caching gotcha: per-class vs global
If you cache results inside a class method, be explicit about whether the cache is per subclass or shared globally. I’ve seen subtle bugs where `Base.cache` was shared and polluted by multiple subclasses.

If the cached result depends on `cls` (very common), the cache should usually live on `cls`, not on the base type.

### Maintainability: limit how far “extension chains” can grow
Deep inheritance chains make it harder to answer basic questions like “where does this behavior come from?”

My rule of thumb:

- 1–3 layers: usually fine
- 4–6 layers: start paying attention
- 7+ layers: I look for a refactor to hooks, composition, or explicit pipelines

Mixins are especially easy to overdo because they feel lightweight, right up until debugging day.

## Alternative Approaches (When Extending a Class Method Isn’t the Best Tool)
I like extended class methods, but I don’t treat them as the only option. Here are a few alternatives I reach for when the inheritance story is weak.

### 1) Plain functions + explicit dependencies
If the “variation” doesn’t need polymorphism, a function is often clearer.
Instead of `cls.parse()`, you pass a `parse_fn`.

This reduces magic, improves testability, and avoids MRO concerns entirely.

### 2) Composition via strategy objects
Instead of subclassing `Config`, you create `ConfigLoader` objects with a consistent interface. You can still keep `Config.from_text()` if you want, but it delegates to a strategy.

This is my go-to when variations multiply (10+ variants) and inheritance starts to feel like bookkeeping.

### 3) functools.singledispatch for type-based behavior
If behavior depends on input type rather than subclass type, `singledispatch` can be a better fit. It’s explicit and doesn’t require an inheritance hierarchy to encode “different cases.”

### 4) __init__ with strict invariants (sometimes it really is enough)
I don’t hate `__init__`. If you can keep initialization simple and avoid flags, it’s fine. But once you see branching, optional parameters that only some callers should use, or lots of “if mode == …” logic, that’s when class method factories start winning.

## When I Do (and Don’t) Reach for Extended Class Methods
Extending class methods is powerful, but I don’t force it everywhere.

I reach for it when:

- I want a stable public API in a base class, with subclass customization.
- I’m building alternate constructors that must return the correct subtype.
- I’m building a registry/plugin system where keys and behavior vary by subclass.
- I need cooperative behavior across mixins.

I avoid it when:

- the behavior is truly instance-specific (use `self`).
- there’s no real inheritance story (a plain function is clearer).
- subclassing is not intended (composition might be the better fit).

A practical heuristic: if you find yourself overriding a class method and copying 80% of the parent’s body, stop and refactor into a base class method that calls smaller hook class methods.
That’s usually the cleanest path.

## A Closing Checklist I Actually Use in Code Reviews
When I review code that extends a class method, I walk through a short checklist:

- Does the base class method rely on `cls(...)` instead of hard-coding the base class?
- Does the subclass call `super()` (and in the right order)?
- Are return values preserved and extended rather than discarded?
- Does the subclass keep the same public signature as the base method (and push customization into hooks)?
- If multiple inheritance/mixins are possible: is this method designed to be cooperative (calls `super()` and forwards args as needed)?
- Are class-level mutable structures (registries/caches) intentionally scoped (global vs per subclass), and are tests protected from cross-test leakage?
- Are the method name and docstring clear about whether overriding is supported and expected?

If a change fails one of those checks, it doesn’t mean it’s wrong, but it does mean I want a deliberate explanation in the code. Inheritance can be powerful, but I only trust it when the design makes the “extension points” obvious and hard to misuse.


