2025, Nov 18 07:00

Why CatBoost Training Stops After One Iteration and How to Use Callbacks Correctly

Learn why CatBoost stops after the first iteration when after_iteration returns None, and how returning True, storing state on the callback, and using info.iteration fix the problem.

CatBoost exposes callbacks to hook into training and react after each boosting step. A common use case is updating a GUI progress bar. However, wiring the callback naively can make training stop after the very first iteration, which looks like a crash but has a much simpler cause.

Repro: training stops after one iteration

The following snippet increments a counter on every iteration and passes the handler to fit. The counter does move, but CatBoost halts right away.

tick_count = 0  # module-level counter updated by the callback

class StepHook:
    def after_iteration(self, evt):
        global tick_count
        tick_count += 1

observer = StepHook()
clf.fit(x_train, eval_set=x_valid, callbacks=[observer])
# Alternatively, construct the callback inline
clf.fit(x_train, eval_set=x_valid, callbacks=[StepHook()])

What actually happens and why

CatBoost’s callback contract uses the return value of after_iteration to decide whether to proceed. Returning True continues training; returning False stops it. If the method returns nothing, Python implicitly returns None, which is treated the same as False, so training stops right after the first call.
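
To see the Python side of this in isolation, here is a minimal sketch, independent of CatBoost, showing that a method with no return statement evaluates to None, which is falsy:

class SilentHook:
    def after_iteration(self, evt):
        pass  # no return statement, so the call evaluates to None

result = SilentHook().after_iteration(None)
print(result)        # None
print(bool(result))  # False, i.e. the "stop" signal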

This behavior is illustrated by examples such as MetricsCheckerCallback and EarlyStopCallback in CatBoost’s own repository. One example explicitly returns True to keep going; the other returns a boolean condition to stop at a chosen iteration.

The fix

Return True at the end of after_iteration when you want training to continue.

class StepHook:
    def after_iteration(self, evt):
        global tick_count
        tick_count += 1
        return True

Keeping state inside the callback

Avoiding globals makes the code easier to reason about. Store the progress counter on the object and access it after fit completes.

class StepTracker:
    def __init__(self):
        self.tally = 0  # progress counter kept on the callback itself

    def after_iteration(self, evt):
        self.tally += 1
        return True  # keep training going

tracker = StepTracker()
clf.fit(x_train, eval_set=x_valid, callbacks=[tracker])
print(tracker.tally)

Reading the current iteration from the callback

The repository examples also show that the info object passed to after_iteration exposes an iteration attribute. You can set your progress counter directly from it.

class IterMeter:
    def __init__(self):
        self.tally = 0

    def after_iteration(self, evt):
        self.tally = evt.iteration  # read the current iteration from the info object
        return True
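
For the GUI use case mentioned at the start, the same idea can drive a progress bar. The sketch below uses tqdm as a stand-in for a real GUI widget; the ProgressBarHook name, the total_iterations value, and the tqdm dependency are assumptions for illustration, not part of the original example:

from tqdm import tqdm

class ProgressBarHook:
    # Sketch: advance a progress bar once per boosting iteration
    # (tqdm stands in for whatever GUI widget you actually update).
    def __init__(self, total_iterations):
        self.bar = tqdm(total=total_iterations)

    def after_iteration(self, info):
        self.bar.update(1)  # one step per completed iteration
        return True         # keep training going

# Hypothetical usage: total_iterations should match the model's iteration count.
# clf.fit(x_train, eval_set=x_valid, callbacks=[ProgressBarHook(total_iterations=1000)])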

Reference patterns from CatBoost examples

The following patterns demonstrate the return-based control flow used by CatBoost callbacks. The first example validates metrics structure and explicitly returns True to keep training alive.

class MetricsGuard:
    def after_iteration(self, info):
        # metric_list is assumed to be defined elsewhere as the metric names
        # tracked by the model (as in CatBoost's own test example)
        for dataset_name in ['learn', 'validation_0', 'validation_1']:
            assert dataset_name in info.metrics
            for m_name in metric_list:
                assert m_name in info.metrics[dataset_name]
                assert len(info.metrics[dataset_name][m_name]) == info.iteration
        return True

trainer.fit(ds_train, y_train,
            callbacks=[MetricsGuard()],
            eval_set=[val0, val1])

The next pattern uses the return value to stop training at a chosen step. When several such callbacks are registered, training halts as soon as any of them returns False, so the smallest threshold is the one that takes effect.

class HaltOnIter:
    def __init__(self, stop_at):
        self._stop_at = stop_at

    def after_iteration(self, info):
        return info.iteration != self._stop_at

trainer.fit(ds_train, y_train, callbacks=[
    HaltOnIter(7),
    HaltOnIter(5),
    HaltOnIter(6)
])

Why this detail matters

Knowing that the callback’s return value controls the training loop prevents head-scratching when runs terminate after a single iteration. It also opens the door to precise control over the process: you can keep iterating, halt at a specific point, or implement custom stop rules, all using the same mechanism.
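
As a sketch of such a custom rule, the callback below stops training once the newest validation metric value is worse than the previous one. The dataset key 'validation_0' and the metric name 'Logloss' are assumptions based on the structure shown in the metrics-checking example above; substitute the names configured on your own model.

class StopOnRegression:
    # Sketch of a custom stop rule: continue only while the most recent
    # Logloss value on the first validation set is not increasing.
    # 'validation_0' and 'Logloss' are assumed names, not fixed by CatBoost.
    def after_iteration(self, info):
        history = info.metrics['validation_0']['Logloss']  # per-iteration metric values
        if len(history) < 2:
            return True                    # not enough history to compare yet
        return history[-1] <= history[-2]  # True -> continue, False -> stop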

Wrap-up

If you want to continue training from after_iteration, always return True. Keep progress inside the callback object if you need to read it later, and consider using info.iteration when you prefer exact iteration indexing. With these small adjustments, updating a GUI progress bar or injecting custom logic into CatBoost’s training loop becomes straightforward and robust.