2025, Nov 30 19:00
Make Python logging in Jupyter predictable: understand root logger vs module helpers and configure explicit handlers
Why logger.info is silent in Jupyter until logging.info runs: basicConfig and root logger explained. Use a named logger and handlers for predictable output.
Python logging in Jupyter can feel unpredictable when messages don’t show up until “something” flips a switch. A common case looks exactly like this: the first call to logging.info suddenly makes everything start working, while direct logger.info calls before that are silent. Here’s what’s going on and how to make it reliable from the start.
Reproducing the issue
The following code configures the root logger’s level and then tries to emit a message. In a Jupyter notebook session (Python 3.12, logging 0.5.1.2), nothing is printed.
import logging
core_log = logging.getLogger()
core_log.setLevel(logging.INFO)
core_log.info("logging test")
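A quick way to confirm the cause before fixing it: in a session that reproduces the problem, the root logger has no handlers attached at this point. The check below continues the snippet above and is only a diagnostic sketch; the expected output assumes nothing else in the session has configured logging.

# Diagnostic: with no handler attached (and nothing else in the session
# having configured logging), both checks come back empty/False.
print(core_log.handlers)       # -> []
print(core_log.hasHandlers())  # -> False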
But if you invoke the module-level function once, output appears immediately and subsequent calls keep working:
import logging
root_alias = logging.getLogger()
root_alias.setLevel(logging.INFO)
logging.info("logging test")
# Now this also shows up
root_alias.info("another logging test")
Why this happens
The behavior comes down to how the module-level helpers in logging behave compared with calling a logger instance method directly. Helpers such as logging.info dispatch to the root logger and, if necessary, bootstrap a default configuration. The documentation for logging.debug captures the relevant part:
This is a convenience function that calls Logger.debug(), on the root logger. The handling of the arguments is in every way identical to what is described in that method.
The only difference is that if the root logger has no handlers, then basicConfig() is called, prior to calling debug on the root logger.
The same behavior applies to logging.info. When you call logging.getLogger() with no arguments, you get the root logger. In both examples above you are using that same root logger. However, no handler is attached until a module-level convenience function like logging.info runs, at which point basicConfig is invoked automatically and a default handler is attached. Before that happens, root_alias.info or core_log.info has nowhere to send its output, which is why nothing appears.
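If you do want to stay on the root logger, the fix is to run that configuration step yourself before the first message. The sketch below assumes a fresh interpreter where no handlers have been attached yet (basicConfig is a no-op once the root logger already has handlers, unless you pass force=True):

import logging

# Configure the root logger explicitly instead of relying on the
# implicit basicConfig call hidden inside logging.info.
logging.basicConfig(level=logging.INFO)

core_log = logging.getLogger()
core_log.info("logging test")  # emitted immediately, no module-level call needed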
A better pattern
Instead of relying on the root logger’s implicit setup, create a named logger and attach a handler explicitly. This avoids silent failures and keeps configuration under your control.
import logging
app_log = logging.getLogger(__name__)
app_log.setLevel(logging.INFO)
stream_out = logging.StreamHandler()
app_log.addHandler(stream_out)
app_log.info("ready to log with an explicit handler")
This approach also scales naturally. A logger can have multiple handlers, and each handler can have its own level, set with handler.setLevel. If you need to tune the output format, attach a formatter to a handler with handler.setFormatter.
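As a sketch of that, here is a logger with two handlers at different levels and a shared formatter. The logger name, filename, and format string are illustrative choices, not anything prescribed by the logging module:

import logging

demo_log = logging.getLogger("notebook.demo")
demo_log.setLevel(logging.DEBUG)  # the logger passes everything to its handlers

fmt = logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s")

# Console handler: INFO and above only.
console = logging.StreamHandler()
console.setLevel(logging.INFO)
console.setFormatter(fmt)

# File handler: everything, including DEBUG.
file_out = logging.FileHandler("notebook.log")
file_out.setLevel(logging.DEBUG)
file_out.setFormatter(fmt)

demo_log.addHandler(console)
demo_log.addHandler(file_out)

demo_log.debug("written to notebook.log only")
demo_log.info("written to both the console and notebook.log")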
Why you should care
Relying on the implicit side effect of module-level helpers can make logging appear non-deterministic, especially in notebooks, scratch scripts, or the REPL. A single call to logging.info quietly initializes the root logger via basicConfig, which masks configuration gaps and hides the real cause of missing output. Understanding that difference prevents silent logs, keeps behavior consistent across cells and sessions, and keeps the boundary between configuration and use clear.
Practical takeaways
Create a dedicated logger with logging.getLogger(__name__) and attach at least one handler before logging. Set clear levels on the logger you own. If you choose the root logger, configure it explicitly rather than relying on an incidental call to logging.info. And pick variable names that aren’t easily confused with the logging module itself; using names like app_log or my_logger keeps intent obvious at a glance.
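One notebook-specific wrinkle: re-running a configuration cell adds another handler each time, which duplicates every message. Here is a minimal sketch of a guard against that; the helper name and format string are my own choices, not a standard API.

import logging

def get_notebook_logger(name: str = "notebook") -> logging.Logger:
    """Return a named logger with exactly one stream handler attached."""
    log = logging.getLogger(name)
    log.setLevel(logging.INFO)
    # Only attach a handler the first time this runs; re-running the cell
    # would otherwise add a duplicate and print every message twice.
    if not log.handlers:
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))
        log.addHandler(handler)
    return log

my_logger = get_notebook_logger()
my_logger.info("configured once, predictable every time")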
Done this way, logging stops being “magical” and starts being predictable, both in Jupyter and outside it.