2025, Sep 29 15:00
Freezing parameters in scipy.optimize.curve_fit with functools.partial: why it breaks with keywords and a clean wrapper fix
Learn why functools.partial can make scipy.optimize.curve_fit fail with ValueError about fit parameters, and use a robust wrapper to freeze parameters by name safely.
Freezing one or more parameters while fitting a model is a common need. With scipy.optimize.curve_fit many developers reach for functools.partial to fix certain arguments by name. This works in some cases, but breaks in others in a way that is surprising at first glance. Below is a concise explanation of why it happens and a robust way to fix it without changing the model logic.
Minimal example that shows the issue
First, the straightforward flow works as expected when binding positionally. Here partial fixes the first argument, x, so curve_fit feeds the x data to a and optimizes b:
from scipy.optimize import curve_fit
from functools import partial
x_vals = [0] * 5
y_vals = x_vals
def model_fn(x, a, b):
    return x + a + b
fixed_pos = partial(model_fn, 2)  # binds the first argument, x, positionally
opt_pos, _ = curve_fit(fixed_pos, x_vals, y_vals)
print(opt_pos)  # [-2.]
Fixing by keyword sometimes works, sometimes not. Freezing b by name is fine:
from scipy.optimize import curve_fit
from functools import partial
x_vals = [0] * 5
y_vals = x_vals
def model_fn(x, a, b):
    return x + a + b
fixed_b = partial(model_fn, b=2)
opt_b, _ = curve_fit(fixed_b, x_vals, y_vals)
print(opt_b)  # [-2.]
But freezing a by name leads to an error:
from scipy.optimize import curve_fit
from functools import partial
x_vals = [0] * 5
y_vals = x_vals
def model_fn(x, a, b):
    return x + a + b
fixed_a = partial(model_fn, a=2)
opt_a, _ = curve_fit(fixed_a, x_vals, y_vals)  # raises ValueError: Unable to determine number of fit parameters.
What is actually going on
curve_fit introspects the call signature of the function it is given to decide which arguments are fed with data and which are free parameters to optimize. Specifically, it uses inspect.signature and expects at least two positional parameters: one that receives the x data, and one or more that remain to be optimized.
Using partial changes that signature. Binding positionally removes the bound leading parameters and leaves the rest positional; binding by keyword converts the bound parameter, and every parameter after it, to KEYWORD_ONLY. When no positional parameters are left for curve_fit to optimize, it raises the “Unable to determine number of fit parameters” error.
from functools import partial
from inspect import signature
def model_fn(x, a, b):
    pass
q1 = partial(model_fn, 1)       # bind x positionally
q2 = partial(model_fn, a=1)     # bind a by keyword
q3 = partial(model_fn, b=1)     # bind b by keyword
print(*[f"{p.name}: {p.kind.name}" for p in signature(q1).parameters.values()], sep=", ")
print(*[f"{p.name}: {p.kind.name}" for p in signature(q2).parameters.values()], sep=", ")
print(*[f"{p.name}: {p.kind.name}" for p in signature(q3).parameters.values()], sep=", ")
From these signatures you can see the pattern: when a is fixed by keyword, x stays positional but the rest become KEYWORD_ONLY, leaving nothing positional to optimize; when b is fixed by keyword, x and a remain positional, so curve_fit can pass data to x and optimize a.
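To make the failure concrete, here is a rough sketch of the kind of check curve_fit performs when no p0 is supplied. This is not scipy's actual code, only an approximation of the signature-based counting described above, and count_fit_params is a hypothetical helper name:
import inspect
from functools import partial
def count_fit_params(f):
    # Approximation (not scipy's implementation): count parameters that can be
    # filled positionally, minus one for the first argument, which receives x.
    positional = [
        p for p in inspect.signature(f).parameters.values()
        if p.kind in (inspect.Parameter.POSITIONAL_ONLY,
                      inspect.Parameter.POSITIONAL_OR_KEYWORD)
    ]
    if len(positional) < 2:
        raise ValueError("Unable to determine number of fit parameters.")
    return len(positional) - 1
def model_fn(x, a, b):
    return x + a + b
print(count_fit_params(partial(model_fn, 2)))    # 1 -> b is the fit parameter
print(count_fit_params(partial(model_fn, b=2)))  # 1 -> a is the fit parameter
print(count_fit_params(partial(model_fn, a=2)))  # raises ValueError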
A practical workaround that keeps keyword-style freezing
If you want to freeze parameters by name and still present curve_fit with a clean positional interface, wrap the model so that the fixed parameters are removed from the signature while the remaining ones stay positional. The helper below preserves the model’s logic and only adjusts the callable’s signature for the optimizer.
import inspect
def lock_params(target, **fixed):
    # Collect the positional parameter names of the original model.
    sig0 = inspect.signature(target)
    pos_kinds = (
        inspect.Parameter.POSITIONAL_OR_KEYWORD,
        inspect.Parameter.POSITIONAL_ONLY,
    )
    posnames = [p.name for p in sig0.parameters.values() if p.kind in pos_kinds]
    # Build a new signature that drops the frozen names and keeps the rest positional.
    remain_params = [
        inspect.Parameter(name, inspect.Parameter.POSITIONAL_OR_KEYWORD)
        for name in posnames
        if name not in fixed
    ]
    sig_new = inspect.Signature(remain_params)
    def bridge(*args, **kwargs):
        # Map the optimizer's positional call onto the remaining names,
        # then call the original model with the frozen values merged back in.
        bound = sig_new.bind(*args, **kwargs)
        bound.apply_defaults()
        return target(**bound.arguments, **fixed)
    # Expose the reduced signature so curve_fit counts only the free parameters.
    bridge.__signature__ = sig_new
    bridge.__name__ = target.__name__
    return bridge
Using it looks like this:
from scipy.optimize import curve_fit
x_vals = [0] * 5
y_vals = [0] * 5
def model_fn(x, a, b):
    return x + a + b
lock_x = lock_params(model_fn, x=1)
lock_a = lock_params(model_fn, a=1)
lock_b = lock_params(model_fn, b=1)
print(curve_fit(lock_x, x_vals, y_vals)[0])  # [-1.]
print(curve_fit(lock_a, x_vals, y_vals)[0])  # [-1.]
print(curve_fit(lock_b, x_vals, y_vals)[0])  # [-1.]
This maintains the same computational logic while ensuring curve_fit sees at least one positional parameter to fit.
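Continuing the example, printing the wrapped signatures confirms what curve_fit sees:
import inspect
print(inspect.signature(lock_x))  # (a, b)
print(inspect.signature(lock_a))  # (x, b)
print(inspect.signature(lock_b))  # (x, a)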
Why this detail matters
Model fitting pipelines often start simple and grow complex as constraints and priors are added. If the function signature silently moves parameters to KEYWORD_ONLY, an optimizer that relies on positional parameters will fail in non-obvious ways. Understanding that curve_fit inspects the callable’s signature helps avoid brittle wrappers and saves time debugging “Unable to determine number of fit parameters” errors.
Takeaways
If you freeze parameters with functools.partial and run into ValueError about determining fit parameters, you are likely presenting a callable with no remaining positional arguments to optimize. Binding by position or freezing parameters with a wrapper that preserves positional arguments resolves the problem cleanly. When in doubt, inspect.signature on your callable shows exactly what curve_fit will see and is the quickest way to confirm whether the optimizer has at least one positional parameter to fit.
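For example, a one-line check on the callables from the first section shows the difference at a glance:
from functools import partial
from inspect import signature
def model_fn(x, a, b):
    return x + a + b
print(signature(partial(model_fn, a=2)))  # (x, *, a=2, b) -- only x is positional
print(signature(partial(model_fn, b=2)))  # (x, a, *, b=2) -- a stays positional and can be fit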