2025, Nov 16 05:00
How to avoid closing sys.stdout when using with: non-closing wrappers, nullcontext, and os.dup
Learn to forward bytes to file-like targets without closing sys.stdout: use a non-closing wrapper, contextlib.nullcontext, or os.dup, with generator CM caveats.
When you pass sys.stdout (or any open stream) into code that wraps it in a with block, you risk having it closed out from under you. That’s fine for temporary files, but a disaster for process-wide streams. The task is to forward bytes to a file-like object without letting the context manager close the underlying descriptor.
Minimal example that triggers the problem
Consider a generic function that consumes an iterator of byte chunks and writes them to a file-like object. The function expects a context manager and uses with, which is the root of the issue when you hand it sys.stdout.buffer.
import io
from typing import IO

def pump_bytes(target: IO[bytes], source):
    # `with` invokes target.__exit__ at the end, which closes file-like objects.
    with target as handle:
        for chunk in source:
            handle.write(chunk)
Passing sys.stdout.buffer works once and then fails on the next use because the with block closes it:
import sys
buf = sys.stdout.buffer
pump_bytes(buf, io.BytesIO(b"text\n")) # writes: text
pump_bytes(buf, io.BytesIO(b"more text\n")) # ValueError: I/O operation on closed file.
Why it happens
A with block drives the context management protocol: it calls __enter__ at the start and __exit__ at the end. For file-like objects, __exit__ typically closes the stream. That default behavior is exactly what you don’t want for global streams like sys.stdout. The goal is to keep the writing logic intact while preventing the close on the non-exceptional path.
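You can watch that default in action with an in-memory stream, so no global state is at risk; io.BytesIO follows the same protocol as real files:

import io

stream = io.BytesIO()
with stream as handle:
    handle.write(b"demo")

print(stream.closed)  # True: __exit__ closed the stream on the way out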
A non-closing wrapper around context managers
If changing pump_bytes isn’t an option, a thin wrapper that intercepts __exit__ solves it. It forwards everything to the wrapped object except for the no-exception exit path, where it intentionally does nothing.
import io
import sys
from typing import IO

class KeepOpenWrapper:
    def __init__(self, inner):
        self._inner = inner

    def __getattr__(self, name):
        # Forward everything else (write, flush, ...) to the wrapped object.
        return getattr(self._inner, name)

    # `with` looks up __enter__/__exit__ on the type, so they must be
    # defined explicitly here; __getattr__ alone would not be found.
    def __enter__(self):
        return self._inner.__enter__()

    def __exit__(self, exc_type, exc_value, tb):
        if exc_type is not None:
            # Defer to the original cleanup on exceptions
            return self._inner.__exit__(exc_type, exc_value, tb)
        # Otherwise, do nothing: the stream stays open.
        return None

def pump_bytes(target: IO[bytes], source):
    with target as handle:
        for chunk in source:
            handle.write(chunk)

nonclosing_stdout = KeepOpenWrapper(sys.stdout.buffer)
pump_bytes(nonclosing_stdout, io.BytesIO(b"text\n"))       # writes: text
pump_bytes(nonclosing_stdout, io.BytesIO(b"more text\n"))  # writes: more text
This keeps sys.stdout.buffer usable across multiple calls by suppressing the close that would normally happen at the end of the with block.
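The exception branch is worth checking too. A minimal sketch, assuming the KeepOpenWrapper class above and using an in-memory stream as a stand-in for stdout:

import io

raw = io.BytesIO()
try:
    with KeepOpenWrapper(raw):
        raise RuntimeError("boom")
except RuntimeError:
    pass

print(raw.closed)  # True: on exceptions the wrapper defers to the real __exit__

On the clean path the same stream stays open; that asymmetry is exactly what the wrapper encodes.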
Important caveat: generator-based context managers
There’s a case where a wrapper like this can’t help. If the object you’re wrapping is created via @contextlib.contextmanager, the close often happens in a finally block inside the generator, and you can’t bypass that from outside.
import contextlib

class Printer:
    def __init__(self):
        print("Opening Printer")

    def write(self, content):
        print(content)

    def close(self):
        print("Closing Printer")

@contextlib.contextmanager
def managed_printer():
    p = Printer()
    try:
        yield p
    finally:
        # KeepOpenWrapper cannot stop this from running.
        p.close()
import io

def pump_bytes(target, source):
    with target as handle:
        for chunk in source:
            handle.write(chunk)

pump_bytes(managed_printer(), io.BytesIO(b"some text"))
pump_bytes(KeepOpenWrapper(managed_printer()), io.BytesIO(b"some text"))  # still closes
Output:

Opening Printer
b'some text'
Closing Printer
Opening Printer
b'some text'
Closing Printer

Note that "Closing Printer" appears even on the wrapped call: skipping __exit__ merely abandons the underlying generator, and when CPython garbage-collects it, GeneratorExit is thrown into the generator and the finally block runs anyway. The cleanup is delayed, not prevented.
Standard-library no-op context manager
When the object is already a proper file-like that you don’t want to close, the standard library offers a ready-made solution: contextlib.nullcontext. It returns the object from __enter__ and its __exit__ is a no-op, which makes it ideal for wrapping sys.stdout.buffer.
import contextlib
import io
import sys

# nullcontext is reusable, so one instance can serve repeated calls.
cm_stdout = contextlib.nullcontext(sys.stdout.buffer)

def pump_bytes(target, source):
    with target as handle:
        for chunk in source:
            handle.write(chunk)

pump_bytes(cm_stdout, io.BytesIO(b"text\n"))       # writes: text
pump_bytes(cm_stdout, io.BytesIO(b"more text\n"))  # writes: more text
Duplicating stdout so the original stays open
If what you really want is a separate handle that can be closed independently, duplicate the underlying file descriptor. Closing the duplicate won’t affect the original.
import io
import os
import sys

# The duplicate gets its own buffer, so flush the original first to keep
# output ordered.
sys.stdout.flush()

fd_clone = os.dup(sys.stdout.fileno())
with os.fdopen(fd_clone, mode="wb") as dup_stream:
    # pump_bytes will close only the duplicate
    pump_bytes(dup_stream, io.BytesIO(b"text\n"))

print("Original stdout still open!")
Why this matters
When you build higher-level I/O flows—such as a subprocess.run-like interface over a websocket that shuttles data to and from a remote process—you need to preserve the semantics of whether a stream should be closed. Some sinks, like open("file"), must be closed. Others, like sys.stdout or descriptor 1, must remain open. Being able to pass that intent down through layers without threading extra flags everywhere keeps the design clean and predictable.
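A minimal sketch of that idea, using a hypothetical open_sink helper (not part of the code above): the caller encodes close-versus-keep-open in the context manager itself, so downstream code can use with unconditionally.

import contextlib
import sys

def open_sink(path):
    # Hypothetical convention: "-" selects stdout, which must stay open;
    # anything else is a real file that should be closed after use.
    if path == "-":
        return contextlib.nullcontext(sys.stdout.buffer)
    return open(path, "wb")

# pump_bytes (as defined above) then treats both sinks identically:
#   pump_bytes(open_sink("-"), source)        # stdout survives
#   pump_bytes(open_sink("out.bin"), source)  # the file is closed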
Practical takeaways
If a function you need to call uses with on the provided file-like object and you must prevent closure, wrap the object with a context manager that no-ops on normal exit. contextlib.nullcontext is the simplest option when you’re passing an existing stream like sys.stdout.buffer. If you actually need a separate closable handle that leaves the original intact, duplicate the descriptor with os.dup and proceed normally. Be aware that generator-based context managers that perform cleanup in a finally block aren’t suppressible from the outside; in such cases, the closing behavior is part of the manager’s contract.