2026, Jan 02 17:00

How to wait for a Python multiprocessing child to start: fix delayed output with Event and flush

Why Python multiprocessing children print late after start(); sync with multiprocessing.Event, skip time.sleep, and flush stdout for timely, reliable output.

Starting a child process with multiprocessing in Python can be surprising the first time you watch the output. You call start(), expect the worker to print immediately, and instead the parent keeps moving, prompts for input, and only then the child’s output appears. Let’s unpack why this happens and how to correctly wait for a child process to be ready before the parent proceeds.

Reproducing the behavior

The following minimal example shows the parent starting a child process and then prompting for input twice:

from multiprocessing import Process


def run_task():
    print("Worker running")


if __name__ == "__main__":
    proc = Process(target=run_task)
    proc.start()
    input("1...")
    input("2...")
    proc.join()

On Python 3.13 on Windows x64, you may see both prompts first and only then the child's "Worker running" message. In other words, start() does not block until the child is fully initialized and printing.

What’s really going on

This behavior is normal for multiprocessing: after you call start(), the parent and child proceed independently. The parent is free to keep running, and the child needs a moment to spawn and reach the print statement. If you expect strict sequencing, a plain start() call is not the synchronization primitive you’re looking for.

Environment can also matter. Running the script directly with python.exe may behave as you expect, while running inside certain IDEs can alter the timing. There is a known issue in PyCharm that can affect this scenario, and running the same code outside PyCharm made the observed delay disappear.

Artificial delays do not solve the correctness problem. A time.sleep() after start() does not guarantee the process has begun executing the desired code. If you must be sure, you need explicit synchronization. And if stdout appears late, remember that output can be buffered; using print(..., flush=True) forces the text to appear immediately.

How to ensure the child is actually running

To make the parent wait until the child reaches a specific point, coordinate with a multiprocessing.Event. The child signals when it has started, and the parent waits for that signal before proceeding. This establishes a clear happens-before relationship without guessing or relying on timing.

import time
from multiprocessing import Process, Event


def job(notify_ready):
    print("Worker started")
    notify_ready.set()  # signal the parent that the child is up and running
    print("Worker is doing some work")
    time.sleep(2)  # simulate work


if __name__ == "__main__":
    ready_flag = Event()
    proc = Process(target=job, args=(ready_flag,))
    proc.start()
    ready_flag.wait()  # block until the child signals readiness
    print("Worker has started. Continuing main process.")
    print("Waiting for worker to finish")
    proc.join()

This pattern ensures the parent proceeds only after the child has positively indicated it is running. When you need ongoing coordination rather than a one-time "ready" signal, communicating via a work queue and a stop event is a common approach.

Why this matters

Relying on implicit ordering often leads to brittle code. Multiprocessing is designed for independent execution; the parent and child do not block one another unless you explicitly synchronize them. Depending on timing, IDE behavior, or output buffering will produce flaky tests, intermittent hangs, or confusing console output. Using explicit synchronization primitives makes behavior deterministic and portable.

Takeaways

If you need the child to be fully started before the parent continues, use multiprocessing.Event or a similar synchronization primitive. Avoid time-based delays as a substitute for synchronization. If output appears late, try print(..., flush=True). And when behavior looks odd inside an IDE, run the same script directly with python.exe to rule out environment-specific interference. For more involved workflows, consider coordinating work with a queue and a stop event to manage lifecycle and shutdown cleanly.