2025, Dec 16 09:00

OpenCV cv2.convexityDefects fails on float32 contours: how to fix the dtype assertion with int32 and read fixed-point depth

cv2.convexityDefects failing in OpenCV? It's a dtype mismatch: float32 contours. Use int32 instead. Learn the fix and how to interpret the fixed-point depth.

When calling OpenCV’s cv2.convexityDefects on a contour built from floating-point coordinates, you may hit a cryptic assertion error. The failure looks unrelated at first glance, yet the root cause is simple and purely about data types.

Minimal reproducible example

The following script constructs a small contour with float32 coordinates, computes the convex hull indices, and then asks for convexity defects. It fails with an assertion.

import numpy as np
import cv2 as cv

# A contour with floating-point coordinates
pts = np.array([[0, 0], [1, 0], [1, 1], [0.5, 0.2], [0, 0]], dtype=np.float32)
h_idx = cv.convexHull(pts, returnPoints=False)   # convexHull accepts float32
defects_out = cv.convexityDefects(pts, h_idx)    # raises the assertion below

The error message reads:

cv2.error: OpenCV(4.11.0) /io/opencv/modules/imgproc/src/convhull.cpp:319: error: (-215:Assertion failed) npoints >= 0 in function 'convexityDefects'

What’s really going on

convexityDefects expects the contour’s point coordinates to be int32, not floating point. This expectation isn’t clearly called out in the documentation, and the assertion text (npoints >= 0) says nothing about data types, so with float32 input the function simply fails before producing a result.

This is consistent with how OpenCV represents contours: functions such as cv2.findContours always produce integral coordinates, the library offers no sub-pixel contour detection, and integer arithmetic was historically cheaper than floating point. That the call fails with an opaque assertion instead of a clear type-check message is arguably a bug in its own right and worth reporting upstream. Float support would have been possible in principle, but as of now it doesn’t exist.
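
A quick check makes the integral-contour claim concrete. This is a minimal sketch, assuming a synthetic test image with a single filled rectangle:

import numpy as np
import cv2 as cv

# Synthetic test image: a filled rectangle on a black background
img = np.zeros((50, 50), dtype=np.uint8)
cv.rectangle(img, (10, 10), (40, 40), 255, -1)

contours, _ = cv.findContours(img, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
print(contours[0].dtype)  # int32 -- contour coordinates are integral by design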

Fix: use int32 contours

Building the contour as int32 resolves the issue immediately. The following code works as expected:

import cv2 as cv
import numpy as np

# Same shape as before, scaled up to integer coordinates
cnt_i = np.array([[0, 0], [10, 0], [10, 10], [5, 2], [0, 0]], dtype=np.int32)
h_ids = cv.convexHull(cnt_i, returnPoints=False)  # hull as indices into cnt_i
d_pts = cv.convexityDefects(cnt_i, h_ids)         # succeeds with int32
print(d_pts)

Output:

[[[  2   0   3 543]]]

Each defect is a 4-element vector: start_index, end_index, farthest_pt_index, fixpt_depth. The first three are indices into the contour: the start and end of the hull edge, and the contour point farthest from it. The last value, fixpt_depth, is a fixed-point approximation (with 8 fractional bits) of the distance between that farthest point and the hull, so the floating-point depth is fixpt_depth / 256.0.
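
To make that concrete, here is a short sketch (continuing from the d_pts result above) that unpacks the defect and converts the depth:

start_i, end_i, far_i, fixpt_depth = d_pts[0, 0]

print(cnt_i[start_i], cnt_i[end_i])  # hull edge endpoints: [10 10] and [0 0]
print(cnt_i[far_i])                  # farthest contour point: [5 2]
print(fixpt_depth / 256.0)           # 543 / 256 = 2.121...

The point (5, 2) lies 3/sqrt(2) ≈ 2.1213 away from the hull edge along y = x, which matches the decoded depth up to the 1/256 quantization.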

Why this matters

Type mismatches like this one are easy to introduce when your pipeline naturally produces float coordinates, for example after normalization or geometric transforms. The failure mode is unhelpful, and without knowing the expectation of int32 contours you could spend time debugging the wrong thing. Understanding that OpenCV’s contour representation is integral by design helps avoid these pitfalls and makes downstream calls like cv2.convexityDefects predictable.
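
If your pipeline hands you float coordinates, the practical fix is to round them to the nearest pixel before the call. A minimal sketch, where round_contour is a hypothetical helper name:

import numpy as np
import cv2 as cv

def round_contour(pts_f):
    # Hypothetical helper: snap float coordinates to the nearest integer pixel
    return np.rint(pts_f).astype(np.int32)

# e.g. float coordinates produced by a geometric transform
pts_f = np.array([[0.2, 0.1], [9.8, 0.0], [10.1, 9.9], [5.0, 2.1]], dtype=np.float32)

cnt = round_contour(pts_f)
hull = cv.convexHull(cnt, returnPoints=False)
defects = cv.convexityDefects(cnt, hull)
print(defects)

If sub-pixel precision matters, one option is to scale the coordinates up by a constant factor before rounding and rescale the decoded depth afterwards.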

Takeaways

If convexityDefects throws an assertion on valid-looking input, check the dtype of your contour. Stick to int32 coordinates to keep the call stable, and remember that the returned depth is fixed-point with a 1/256 scale. Keeping these details in mind saves time and prevents confusing crashes in production image processing code.