2025, Oct 07 15:00

QuantStats Tear Sheet Breaks with Benchmark: Reproduce the Pandas Error and Fix It by Using 0.0.70 or 0.0.74

Learn why QuantStats HTML tear sheets fail when adding a benchmark (pandas truth-value error) and how to fix it fast by using versions 0.0.70 or 0.0.74.

When generating a QuantStats HTML tear sheet for a single asset, everything works. The moment a benchmark is added, report generation can stop with a pandas error. This guide walks through a minimal reproduction, explains what exactly breaks, and shows practical ways to resolve it using supported QuantStats versions.

Reproducing the problem

The issue appears when building an HTML report with a benchmark. The following snippet fetches daily returns for GLD and tries to compare it with SPY in a tear sheet.

import quantstats as qs

qs.extend_pandas()  # attach QuantStats methods to pandas objects

# Download daily returns for the asset
asset_ret = qs.utils.download_returns('GLD')

qs.reports.html(
    asset_ret,
    benchmark="SPY",  # benchmark ticker, downloaded internally
    title='Gold vs S&P 500',
    output='reports/gld_vs_spy.html'
)

In problematic versions, running this ends with an exception similar to:

ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().

What’s going on

The traceback points to QuantStats internals invoking kelly_criterion and evaluating a pandas Series in a boolean context, which raises the ValueError shown above. The behavior was tracked in the project’s issue tracker and identified as a bug in the library. According to the report, the problem occurs in versions 0.0.71, 0.0.72, and 0.0.73; it does not occur in 0.0.70 and is resolved in 0.0.74.
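The error class is easy to reproduce outside QuantStats: pandas deliberately refuses to coerce a multi-element Series to a single boolean, so any internal check like `if some_series:` raises exactly this ValueError. The snippet below is a minimal sketch of that failure mode and its unambiguous alternatives, not the library’s actual code:

```python
import pandas as pd

returns = pd.Series([0.01, -0.02, 0.03])

# Ambiguous: pandas cannot decide whether a multi-element
# Series should count as True or False, so it raises.
try:
    if returns:
        pass
except ValueError as exc:
    print(exc)  # "The truth value of a Series is ambiguous. ..."

# Unambiguous alternatives reduce the Series to a single boolean first:
has_data = not returns.empty        # True if the Series has any elements
any_positive = (returns > 0).any()  # True if any return is positive
print(has_data, any_positive)
```

The fix for this class of bug is always the same: replace the implicit truth test with an explicit reduction such as .empty, .any(), or .all().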

How to fix it

There are two safe paths: downgrade QuantStats to a known-good release that predates the regression, or upgrade to the release that contains the fix.

To use a working older version:

pip install --upgrade quantstats==0.0.70

To use the version that resolves the bug:

pip install --upgrade quantstats==0.0.74
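After installing either release, it is worth confirming which version the environment actually resolved before rerunning the report. A quick check using the standard-library importlib.metadata API (Python 3.8+):

```python
from importlib.metadata import version, PackageNotFoundError

try:
    # Prints the installed release, e.g. 0.0.70 or 0.0.74
    print("quantstats", version("quantstats"))
except PackageNotFoundError:
    print("quantstats is not installed in this environment")
```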

Once the environment uses a working release, rerunning the same snippet from the reproduction above generates the HTML tear sheet as expected.

Why this matters

Even small version changes in analytics libraries can alter behavior in subtle ways. In this case, a routine benchmark comparison tripped an internal condition and broke report generation until a fixed version was released. Awareness of version-specific behavior helps avoid wasted time on debugging code that is otherwise correct.

Takeaways

If an established workflow suddenly starts failing on a library update, check the project’s issue tracker and confirm whether the regression is known and already fixed. When reproducibility matters, pin precise versions in your environment. For this specific case, generating QuantStats reports with a benchmark works with 0.0.70 and is fixed again in 0.0.74.
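One common way to pin is a requirements file checked into the project; the file name and layout below are a convention, not anything QuantStats-specific:

```
# requirements.txt - pin the release containing the fix
quantstats==0.0.74
```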

The article is based on a question from StackOverflow by Dame Skytower and an answer by furas.