2025, Dec 19 19:00

Stop misusing typeguard.check_type: validate at runtime in Python and type safely with overloads

Learn why typeguard.check_type returns values, not booleans. Build a type-safe helper with overloads for runtime type validation in Python and avoid falsy bugs.

When you mix static typing with runtime guarantees, Python's cast(type, obj) quickly hits a hard limit: it provides no runtime verification. You either trust the annotation or you don't. If you actually want to validate values at runtime while keeping type checkers happy, typeguard.check_type is the right tool—provided you use it correctly and give it a type-safe wrapper.

Problem: a helper that looks right but fails at runtime and in typing

Consider a small utility designed to check a value against a type hint and return it if it matches. The intent is to get a clear, single-expression flow instead of sprinkling if checks everywhere. A straightforward attempt might look like this:

from typing import Any, TypeVar, cast

from typeguard import check_type

T = TypeVar("T")

def enforce_kind(item: Any, hint: Any):  # noqa: ANN401 This function should hold Any value
    """Return the item if it matches the expected type, otherwise raise TypeError."""
    if check_type(item, hint):
        return cast("T", item)
    message = f"Type check failed: type({item}) is {type(item)}, expected {hint}"
    raise TypeError(message)

This reads nicely at a glance, but it bakes in a subtle bug and doesn’t give type checkers enough information about the return type.

Why it breaks: what check_type actually returns

The core issue is how typeguard.check_type behaves. It does not return a boolean: it returns the original value when the check succeeds and raises TypeError when it fails. That means any truthiness check around it is wrong and will misfire on falsy values. For example, a boolean False or an empty dict passes the type check but evaluates as false in the if condition, incorrectly triggering the error path.
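The pitfall is easy to reproduce without the library. A minimal stdlib-only stand-in (a hypothetical check_type_stub, not part of typeguard) that follows the same contract, returning the value on success and raising TypeError on failure, shows how the if wrapper misfires:

```python
def check_type_stub(value, expected_type):
    # Mimics typeguard.check_type's contract with a plain isinstance check:
    # return the original value on success, raise TypeError on failure.
    if isinstance(value, expected_type):
        return value
    raise TypeError(f"{value!r} is not an instance of {expected_type.__name__}")

# Both checks succeed and hand the original value back...
assert check_type_stub(False, bool) is False
assert check_type_stub({}, dict) == {}

# ...yet wrapping the call in `if` treats these valid results as failures:
if not check_type_stub(False, bool):
    print("misfire: False passed the type check but hit the error path")
```

The print line runs even though False is a perfectly valid bool, which is exactly the bug baked into the enforce_kind helper above.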

This also makes a more direct call style perfectly valid and clearer: obj = check_type(obj, type). That pattern communicates both the runtime validation and the refined type after the call.

Before vs. after at the call site

With a helper in place, call sites can be compact. The same flow without the helper is just as linear when you let check_type return the value:

# Using a helper
payload = enforce_kind(fetch_payload(), dict[str, Any])
token = enforce_kind(payload.get("key"), AssuredKey)
print(token)

# Using check_type directly
payload = check_type(fetch_payload(), dict[str, Any])
token = check_type(payload.get("key"), AssuredKey)
print(token)

Both variants avoid branching and keep the value flow obvious. The key is to rely on the return value rather than treating check_type as a predicate.

Typing the helper with overloads

The function signature of check_type drives the correct typing. Its overloads clarify when the return value is the narrowed type and when it stays as Any:

@overload
def check_type(
    value: object,
    expected_type: type[T],
    *,
    forward_ref_policy: ForwardRefPolicy = ...,
    typecheck_fail_callback: TypeCheckFailCallback | None = ...,
    collection_check_strategy: CollectionCheckStrategy = ...,
) -> T: ...


@overload
def check_type(
    value: object,
    expected_type: Any,
    *,
    forward_ref_policy: ForwardRefPolicy = ...,
    typecheck_fail_callback: TypeCheckFailCallback | None = ...,
    collection_check_strategy: CollectionCheckStrategy = ...,
) -> Any: ...

A helper that mirrors this approach and simply returns check_type’s result is both correct at runtime and friendly to static analyzers.
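The same two-overload pattern can be exercised on a toy stdlib-only function (a hypothetical ensure, not the real check_type): when the expected type is a plain class, the first overload narrows the return type; anything else falls through to the Any overload. A minimal sketch:

```python
from typing import Any, TypeVar, overload

T = TypeVar("T")

@overload
def ensure(value: object, expected: type[T]) -> T: ...
@overload
def ensure(value: object, expected: Any) -> Any: ...

def ensure(value: Any, expected: Any) -> Any:
    # Only plain classes are checked at runtime here; a real helper would
    # delegate to typeguard.check_type for generics and unions.
    if isinstance(expected, type) and not isinstance(value, expected):
        raise TypeError(f"{value!r} is not an instance of {expected.__name__}")
    return value

n = ensure(42, int)    # first overload applies: static type narrows to int
d = ensure({}, dict)   # still the first overload: type narrows to dict
assert n == 42 and d == {}
```

A type checker sees n as int and d as dict, while an expected type it cannot match against type[T] falls back to Any instead of producing a false error.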

Fix: return the validated value and express the overloads

The robust implementation keeps check_type as the single source of truth and optionally rephrases the error message. It also defines overloads for common expected type shapes:

from types import GenericAlias, UnionType
from typing import Any, TypeVar, overload

from typeguard import check_type

T = TypeVar("T")

@overload
def affirm_kind(obj: object, expected: type[T]) -> T: ...

@overload
def affirm_kind(obj: object, expected: GenericAlias) -> Any: ...

@overload
def affirm_kind(obj: object, expected: UnionType) -> Any: ...

def affirm_kind(obj: Any, expected: Any) -> Any:
    """Validate and return the object when types match, else raise TypeError."""
    try:
        return check_type(obj, expected)
    except TypeError as err:
        detail = f"Type check failed: type({obj}) is {type(obj)}, expected {expected}"
        raise TypeError(detail) from err

This version correctly handles falsy values, preserves the original object, and exposes enough typing information to guide static analysis.
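To see the falsy-value behavior concretely, the same shape can be tested with a stdlib-only stand-in for check_type (the stub and affirm_kind_demo below are hypothetical names, mirroring the helper above without the typeguard dependency):

```python
def check_type_stub(obj, expected):
    # Stand-in for typeguard.check_type: return obj on success, raise on failure.
    if isinstance(obj, expected):
        return obj
    raise TypeError(f"{obj!r} is not an instance of {expected.__name__}")

def affirm_kind_demo(obj, expected):
    """Same shape as affirm_kind above, with the stub in place of check_type."""
    try:
        return check_type_stub(obj, expected)
    except TypeError as err:
        detail = f"Type check failed: type({obj}) is {type(obj)}, expected {expected}"
        raise TypeError(detail) from err

assert affirm_kind_demo({}, dict) == {}        # falsy but valid: returned intact
assert affirm_kind_demo(False, bool) is False  # no spurious error path
try:
    affirm_kind_demo("nope", dict)
except TypeError as exc:
    print(exc)  # Type check failed: type(nope) is <class 'str'>, expected <class 'dict'>
```

Falsy inputs flow straight through, and a genuine mismatch raises with the rephrased message instead of the library's default.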

Why this nuance matters

Getting this wrong introduces a class of bugs that only appear with falsy but valid values. Using check_type as a predicate discards the result and conflates validation with truthiness, which is not what the function does. Returning the value aligns runtime behavior with developer intent and produces clearer code paths. With overloads in place, type checkers can understand the narrowing and reduce noise in annotations.

Conclusion

If you need runtime type validation in Python, rely on check_type’s contract: it returns the input on success and raises TypeError on failure. Structure your helper to return that value and express overloads for the expected types you care about. At call sites, prefer straight-line assignments to boolean checks. This keeps the runtime semantics correct, makes code easier to read, and gives static analyzers the hints they need without extra ceremony.