2025, Dec 27 07:00
Pytest pattern for VRP testing: per-field rules, thresholds, and clean dataclass outcomes
Learn a lightweight pytest pattern for VRP testing: keep dataclasses simple and assert per-field rules or thresholds with flexible route optimization checks.
Testing route optimization often starts with a clean equality check and quickly runs into reality: different scenarios demand different acceptance criteria. Sometimes you need exact route shapes, sometimes it’s enough to be under a time threshold. Here is a compact pattern that keeps your data model lightweight and lets pytest assert exactly what matters for each scenario.
The brittle equality-based approach
A straightforward way is to encode expectations directly into a dataclass and override equality so the object can be compared in a single assert. That looks tidy, but it locks you into one validation strategy — strict equality — and ignores richer cases like thresholds or custom predicates.
from dataclasses import dataclass, fields
from typing import Optional


@dataclass(slots=True)
class VrpOutcome:
    """
    Container for VRP solution artifacts.
    """
    engine_time: Optional[float] = None
    sum_travel_time: Optional[int] = None
    leg_durations: Optional[list[int]] = None
    path_nodes: Optional[list[list[int]]] = None

    def __post_init__(self):
        # Derive the total travel time from the legs when it is not given.
        if self.leg_durations is not None and self.sum_travel_time is None:
            self.sum_travel_time = sum(self.leg_durations)

    def __eq__(self, other):
        if not isinstance(other, VrpOutcome):
            raise TypeError(f"Cannot compare {type(self)} with {type(other)}")
        same = True
        for fld in fields(VrpOutcome):
            left = getattr(self, fld.name)
            right = getattr(other, fld.name)
            # None on either side means "no expectation" -- skip the field.
            if left is None or right is None:
                continue
            if right != left:
                same = False
                break
        return same


# Usage in a test
case_data, expected_outcome = build_case()
predicted_outcome = run_vrp(case_data)
assert expected_outcome == predicted_outcome, "Solution does not match expected results"
Why this fails in real scenarios
Strict equality is great when you know exact answers. But optimization outputs are often acceptable within bounds or under constraints: you might not care about the exact best travel time, only that it stays below a limit. Encoding all of that into a single __eq__ turns the model into a grab-bag of rules that is hard to read and mixes test policy with data.
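To make the grab-bag concrete, here is a hypothetical sketch of where that road leads once a single threshold sneaks into __eq__; the class name and the semantics are invented for this illustration, not part of the pattern above:

from dataclasses import dataclass
from typing import Optional


@dataclass(slots=True)
class ThresholdedOutcome:
    sum_travel_time: Optional[int] = None
    path_nodes: Optional[list[list[int]]] = None

    def __eq__(self, other):
        if not isinstance(other, ThresholdedOutcome):
            return NotImplemented
        # Test policy leaking into the data model: this field is now a
        # threshold check (actual must stay below the expected value)...
        if self.sum_travel_time is not None and other.sum_travel_time is not None:
            if not other.sum_travel_time < self.sum_travel_time:
                return False
        # ...while this one still means strict equality. Every new
        # acceptance rule grows this method further.
        if self.path_nodes is not None and other.path_nodes is not None:
            if self.path_nodes != other.path_nodes:
                return False
        return True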
A flexible check layer for pytest
A cleaner approach is to keep the dataclass focused on storing results and move validation logic into a dedicated comparison function. That function takes a mapping of field names to custom checks. If a field has a rule, it runs the rule; if not, it uses equality when an expected value is provided. If the expected value is None and no rule is given, the field is skipped.
from dataclasses import dataclass, fields
from typing import Any, Callable, Optional


@dataclass(slots=True)
class VrpOutcome:
    engine_time: Optional[float] = None
    sum_travel_time: Optional[int] = None
    leg_durations: Optional[list[int]] = None
    path_nodes: Optional[list[list[int]]] = None

    def __post_init__(self):
        if self.leg_durations is not None and self.sum_travel_time is None:
            self.sum_travel_time = sum(self.leg_durations)


def review_outcomes(actual: VrpOutcome,
                    expected: VrpOutcome,
                    rules_map: Optional[dict[str, Callable[[Any], bool]]] = None
                    ) -> tuple[bool, str]:
    rules_map = rules_map or {}
    for fld in fields(VrpOutcome):
        val_actual = getattr(actual, fld.name)
        val_expected = getattr(expected, fld.name)
        if fld.name in rules_map:
            # A custom rule takes precedence over plain equality.
            if not rules_map[fld.name](val_actual):
                return False, f"{fld.name} failed check: got {val_actual}"
        # No rule: fall back to equality, skipping fields expected as None.
        elif val_expected is not None and val_actual != val_expected:
            return False, f"{fld.name} does not match: expected {val_expected}, got {val_actual}"
    return True, ""
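Before adding any custom rules, note that the plain call already covers the strict case from earlier. A minimal usage sketch, reusing the build_case and run_vrp helpers from the first example:

def test_exact_routes():
    case_data, expected_outcome = build_case()
    predicted_outcome = run_vrp(case_data)
    # No rules map: fields with an expected value are compared for equality,
    # and fields left as None in expected_outcome are skipped entirely.
    ok, msg = review_outcomes(predicted_outcome, expected_outcome)
    assert ok, msg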
Example: threshold-based acceptance
When the exact optimum is unknown but a bound is acceptable, you can express that as a predicate for the field in question. The assertion remains a single line with a helpful failure message.
def test_case_three():
    case_data, expected_outcome = build_case()
    predicted_outcome = run_vrp(case_data)
    acceptance = {
        "sum_travel_time": lambda v: v < 20  # anything under 20 is acceptable
    }
    ok, msg = review_outcomes(predicted_outcome, expected_outcome, acceptance)
    assert ok, msg
If you prefer expressing intent with operators rather than ad-hoc lambdas, you can use a function from the operator module, such as operator.lt, instead of equality for specific fields and wire it into the same comparison layer.
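A minimal sketch of that wiring; the below helper is invented here to bind a limit around operator.lt so the rules map reads declaratively:

import operator


def below(limit):
    # below(20) returns a predicate: operator.lt(v, 20), i.e. v < 20.
    return lambda v: operator.lt(v, limit)


def test_case_three_with_operators():
    case_data, expected_outcome = build_case()
    predicted_outcome = run_vrp(case_data)
    acceptance = {"sum_travel_time": below(20)}
    ok, msg = review_outcomes(predicted_outcome, expected_outcome, acceptance)
    assert ok, msg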
Why this matters
Separating storage from validation keeps the model simple and your tests expressive. You can tailor checks per scenario without bloating the data structure or coupling it to testing concerns. The approach scales from exact structural comparisons, like matching route lists, to looser constraints, like time thresholds, while preserving clear, field-specific failure messages.
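As a sketch of that range, a single rules map can mix a structural predicate with a timing bound while the remaining fields fall back to equality or are skipped; the depot convention (routes start and end at node 0) and the 1.5-second budget are invented for illustration:

def test_mixed_criteria():
    case_data, expected_outcome = build_case()
    predicted_outcome = run_vrp(case_data)
    acceptance = {
        # Structural rule: every route starts and ends at the depot (node 0).
        "path_nodes": lambda routes: routes is not None
            and all(r[0] == 0 and r[-1] == 0 for r in routes),
        # Looser constraint: a performance budget on the solver runtime.
        "engine_time": lambda t: t < 1.5,
    }
    ok, msg = review_outcomes(predicted_outcome, expected_outcome, acceptance)
    assert ok, msg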
Takeaways
Use a dataclass to hold VRP results and compute derived values such as total travel time in a single place. Move assertions into a dedicated function that applies per-field predicates when provided and falls back to equality when expectations are explicit. Keep expected values as None to skip checks on a field, and define concise per-scenario rules to match how the optimizer is actually evaluated. This way, pytest stays readable, diagnostics are precise, and your test suite reflects the real acceptance criteria of your optimization logic.