2026, Jan 04 01:00
How to test Apache Airflow operators with pytest: render Jinja templates (prev_ds, ds) and fix task.run vs execute issues
Learn to test Apache Airflow operators with pytest: prepare runtime context, render Jinja prev_ds/ds, fix attribute mismatches, and avoid failed TaskInstances.
Testing Airflow operators with pytest is a great way to validate templating and context behavior without spinning up the full scheduler stack. A common stumbling block appears when comparing task.execute with task.run: templates like {{ prev_ds }} and {{ ds }} render only when the task runtime is correctly prepared. Below is a minimal case where the test passes but the logs show a failed TaskInstance, which is confusing at first glance.
Problem setup
import datetime

import pytest
from airflow.models import DAG


@pytest.fixture
def fx_dag():
    return DAG(
        "probe_dag",
        default_args={
            "owner": "airflow",
            "start_date": datetime.datetime(2025, 4, 5),
            "end_date": datetime.datetime(2025, 4, 6)
        },
        schedule=datetime.timedelta(days=1)
    )
import datetime

from airflow.models import BaseOperator
from airflow.models.dag import DAG
from airflow.utils import timezone


class RenderProbeOperator(BaseOperator):
    template_fields = ("_from_date", "_to_date")

    def __init__(self, start_date, end_date, **kwargs):
        super().__init__(**kwargs)
        self._from_date = start_date
        self._to_date = end_date

    def execute(self, context):
        context["ti"].xcom_push(key="start_date", value=self.from_date)
        context["ti"].xcom_push(key="end_date", value=self.to_date)
        return context
def test_run_exec(fx_dag: DAG):
    op = RenderProbeOperator(
        task_id="t_check",
        start_date="{{ prev_ds }}",
        end_date="{{ ds }}",
        dag=fx_dag
    )
    op.run(
        start_date=fx_dag.default_args["start_date"],
        end_date=fx_dag.default_args["end_date"]
    )
    exp_start = datetime.datetime(2025, 4, 5, tzinfo=timezone.utc)
    exp_end = datetime.datetime(2025, 4, 6, tzinfo=timezone.utc)
    assert op.start_date == exp_start
    assert op.end_date == exp_end
Despite assertions passing, the logs show the task in failed state:
[2025-04-26T12:51:18.289+0000] {taskinstance.py:2604} INFO - Dependencies not met for <TaskInstance: probe_dag.t_check manual__2025-04-05T00:00:00+00:00 [failed]>, dependency 'Task Instance State' FAILED: Task is in the 'failed' state.
What is actually wrong
There are two independent issues. First, there is an attribute mismatch inside the operator: the constructor stores values in underscored attributes (_from_date, _to_date), while execute looks up the non-underscored names (from_date, to_date). That lookup raises AttributeError, which fails the TaskInstance and explains the “dependencies not met” messages on subsequent runs: the state is already failed.
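The failure mode can be reproduced without Airflow at all. Here is a minimal sketch with a hypothetical Probe class (a stand-in, not the operator above) showing how the mismatched lookup surfaces as AttributeError at execution time:

```python
class Probe:
    def __init__(self, start_date, end_date):
        self._from_date = start_date  # stored with a leading underscore
        self._to_date = end_date

    def execute(self):
        # looks up the non-underscored names, which were never set
        return (self.from_date, self.to_date)


try:
    Probe("{{ prev_ds }}", "{{ ds }}").execute()
except AttributeError as exc:
    err = str(exc)
```

Inside a real operator, this exception is caught by the task runner and recorded as a failed TaskInstance rather than propagating to your test.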
Second, template rendering happens during task.run with a proper runtime context. Passing Jinja expressions like {{ prev_ds }} and {{ ds }} is fine, but they must be rendered against an execution context. Without the right runtime parameters, the values you want to verify won’t be reliably materialized.
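To illustrate the idea (this is a toy substitution, not Airflow's actual Jinja engine): a template string stays a literal string until something renders it against a context, which is exactly what a properly prepared task.run provides.

```python
context = {"ds": "2025-04-05", "prev_ds": "2025-04-04"}

def render(value, ctx):
    # crude stand-in for Jinja: substitute "{{ key }}" placeholders
    for key, val in ctx.items():
        value = value.replace("{{ " + key + " }}", val)
    return value

raw = "{{ prev_ds }}"
rendered = render(raw, context)
# without a rendering step, raw is passed through unchanged
```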
Fixing the operator and preparing the run
Correct the attribute access so execute uses the same attributes populated in __init__. Then, run the task with a context suitable for templating. This ensures {{ prev_ds }} and {{ ds }} resolve and are pushed via XCom as expected.
from airflow.models import BaseOperator
from airflow.models.dag import DAG


class RenderProbeOperator(BaseOperator):
    template_fields = ("_from_date", "_to_date")

    def __init__(self, start_date, end_date, **kwargs):
        super().__init__(**kwargs)
        self._from_date = start_date
        self._to_date = end_date

    def execute(self, context):
        context["ti"].xcom_push(key="start_date", value=self._from_date)
        context["ti"].xcom_push(key="end_date", value=self._to_date)
        return context
import datetime

from airflow.models.dag import DAG


def test_run_exec(fx_dag: DAG):
    op = RenderProbeOperator(
        task_id="t_check",
        start_date="{{ prev_ds }}",
        end_date="{{ ds }}",
        dag=fx_dag
    )
    run_dt = fx_dag.default_args["start_date"]
    stop_dt = fx_dag.default_args["end_date"]
    op.run(
        start_date=run_dt,
        end_date=stop_dt,
        execution_date=run_dt,
        run_id=f"test_run_{run_dt.isoformat()}",
        ignore_first_depends_on_past=True
    )
One version caveat: on Airflow 2.10.5, passing execution_date to BaseOperator.run raises TypeError: BaseOperator.run() got an unexpected keyword argument 'execution_date'. Check the run signature of the Airflow version you are on and drop unsupported arguments when preparing the runtime context.
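One defensive pattern for version-specific signatures is to inspect the callable before passing keyword arguments. The sketch below uses a hypothetical stand-in function rather than the real BaseOperator.run, but the same inspect-based filtering applies to it:

```python
import inspect

def run(start_date=None, end_date=None, run_id=None,
        ignore_first_depends_on_past=False):
    # hypothetical stand-in for BaseOperator.run on a version
    # that does not accept execution_date
    return {"start_date": start_date, "run_id": run_id}

supported = set(inspect.signature(run).parameters)
desired = {
    "start_date": "2025-04-05",
    "execution_date": "2025-04-05",  # not accepted by this signature
}
safe_kwargs = {k: v for k, v in desired.items() if k in supported}
run(**safe_kwargs)
```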
Why this matters
Comparing task.execute and task.run is useful when you want to see how Airflow resolves Jinja templates like prev_ds and ds. The former runs your operator logic directly, while the latter sets up the runtime where templating is applied and task state is tracked. If the operator references the wrong attributes, the task fails even if your test assertions look fine, and the scheduler dutifully reports dependency failures due to the failed state.
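The contrast is easy to demonstrate in isolation. Calling execute directly with a hand-built context (here with a mocked ti and a hypothetical Probe class, not real Airflow machinery) bypasses templating entirely, so the Jinja strings arrive unrendered:

```python
from unittest import mock

class Probe:
    def __init__(self, start_date, end_date):
        self._from_date = start_date
        self._to_date = end_date

    def execute(self, context):
        context["ti"].xcom_push(key="start_date", value=self._from_date)
        return self._from_date

ti = mock.MagicMock()
result = Probe("{{ prev_ds }}", "{{ ds }}").execute({"ti": ti})
# result is still the literal "{{ prev_ds }}" string: no runtime
# prepared the template, so nothing rendered it
```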
Takeaways
Align attribute names in the operator with what you set in the constructor and list in template_fields. Provide an appropriate context when invoking task.run so Jinja variables are rendered. Watch the logs: if you see a failed TaskInstance and unmet dependencies, look for exceptions in execute and mismatches in attribute usage. And be aware of the task.run signature in your Airflow version so you don’t pass unsupported arguments.