2025, Dec 30 11:00

How to Make BehaveX after_all Run Once in Parallel and Generate a Single, Consolidated Allure Report

Learn why BehaveX runs after_all once per worker in parallel mode and how to fix it: guard the hook with a file lock and a flag, or run Allure report generation once after the tests finish.

When BehaveX runs features or scenarios in parallel, the after_all hook may unexpectedly execute more than once. If you rely on this hook to bump counters or compile a single Allure report, having multiple processes call the same logic quickly leads to duplicate work and conflicting outputs.

Minimal example that triggers the issue

The following snippet illustrates the pattern: one global hook updates a counter, timestamps a report directory, and generates a consolidated Allure report. In parallel mode, this ends up running per worker, not once per entire test run.

import os
from datetime import datetime

@async_run_until_complete
async def after_all(ctx):
    # In parallel mode this runs once per worker process, not once per run
    bump_counter("NumbersAutomation.txt")
    ts = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    report_dir = os.path.join("features/reports", ts)
    os.system(f"allure generate --single-file allure_results -o {report_dir}")

Command used to parallelize with BehaveX:

behavex features -t=~@skip --parallel-processes=4 --show-progress-bar --parallel-scheme=feature

What’s going on

In parallel mode, each process has its own lifecycle and runs its own hooks. That means after_all is not guaranteed to be a single, global “run-once” point across all workers. The result is repeated counter increments and multiple invocations of the Allure generation step. In short, after_all isn’t reliable in parallel mode for singleton post-run tasks.
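
You can confirm this yourself with a small diagnostic. The sketch below (a simplified synchronous hook, with a hypothetical log file name) appends the worker PID every time after_all fires; with --parallel-processes=4 the log will typically show several distinct PIDs rather than one.

```python
import os
from datetime import datetime

def after_all(ctx):
    # Append one line per invocation: timestamp plus the PID of the
    # process that executed the hook. Multiple distinct PIDs in the log
    # prove the hook ran once per worker, not once per run.
    with open("after_all_invocations.log", "a") as fh:
        fh.write(f"{datetime.now().isoformat()} pid={os.getpid()}\n")
```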

Two practical ways to make it deterministic

The first option is to guard the after_all logic with an interprocess lock and a simple flag file. Only the first process that acquires the lock and doesn’t find the flag file performs the expensive work; everyone else skips it. This solution keeps the hook-driven flow but enforces singleton semantics.

import os
from datetime import datetime
from filelock import FileLock

@async_run_until_complete
async def after_all(ctx):
    lock_file = "after_all.lock"
    with FileLock(lock_file):
        # Only the first worker to acquire the lock finds no flag file
        # and performs the work; every later worker sees the flag and skips
        if not os.path.exists("after_all_ran.flag"):
            bump_counter("NumbersAutomation.txt")
            ts = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
            report_dir = os.path.join("features/reports", ts)
            os.system(f"allure generate --single-file allure_results -o {report_dir}")
            with open("after_all_ran.flag", "w") as fh:
                fh.write("done")
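
One caveat: after_all.lock and after_all_ran.flag persist between runs, so a stale flag would make every subsequent run skip the report. A small pre-run cleanup helper (the function name is illustrative, not part of BehaveX) keeps the guard fresh:

```python
import os

def reset_run_once_guard(flag="after_all_ran.flag", lock="after_all.lock"):
    """Remove stale guard files so the next run's first worker does the work."""
    for path in (flag, lock):
        try:
            os.remove(path)
        except FileNotFoundError:
            pass  # nothing to clean up on a fresh environment
```

Call it from your CI step or wrapper script before launching behavex, not from before_all: before_all also runs once per worker, so clearing the flag there would reintroduce the race.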

The second option is to remove the singleton work from the hook entirely and run it once after BehaveX finishes. This avoids coordination between processes and keeps the pipeline straightforward: first run tests, then generate the report.

behavex features -t=~@skip --parallel-processes=4 --show-progress-bar --parallel-scheme=feature
allure generate --single-file allure_results -o features/reports/$(date +%Y-%m-%d_%H-%M-%S)

It’s also valid to orchestrate the same flow in a small wrapper script that starts BehaveX, waits for completion, and then invokes the post-run logic.
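
A minimal wrapper sketch of that flow (the function name and command-list parameters are illustrative): it blocks until the whole suite exits, then generates the report exactly once, with no lock files needed.

```python
import subprocess

def run_suite_then_report(behavex_cmd, allure_cmd):
    """Run the full test suite, then perform the post-run work exactly once."""
    suite = subprocess.run(behavex_cmd)       # blocks until all workers exit
    subprocess.run(allure_cmd, check=True)    # single report generation
    return suite.returncode
```

In practice you would pass the same behavex and allure command lines shown above as the two argument lists.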

Why this matters

Parallel execution is great for speed, but it changes lifecycle semantics. Anything that should happen exactly once per entire run—updating a shared counter, writing a timestamped target path, or generating a unified Allure report—must be protected. Otherwise, you end up with duplicated work, racy file writes, and inconsistent outputs.

Conclusion

If you need a single post-run step while using BehaveX with multiple processes, either enforce a run-once guard with a file-based lock and a flag, or move the action to a separate command executed after the test run. The first approach preserves an in-hook workflow; the second keeps the pipeline simple and avoids cross-process coordination. Choose whatever fits your CI and tooling, but make sure the “one and only one” invariant is explicit when you turn on parallelism.