2025, Dec 18 21:00

Quadratic Volterra Image Enhancement in Python: the one-line fix—compose with the mapped base

Learn why your quadratic Volterra image enhancement looks like unsharp mask and fix it: compose with the mapped base after the Teager-like filter. Python code.

Reproducing adjustable quadratic filters for image enhancement looks deceptively simple: normalize, remap intensities, run a Teager-like quadratic Volterra filter, blend, and denormalize. Yet a single composition mistake can completely cancel the effect of the nonlinear mapping and make the output look like a weak unsharp mask. Below is a concise walkthrough of the pitfall and its fix, with code you can drop in and test.

Problem

The enhancement pipeline normalizes the grayscale image to [0,1], applies an input mapping such as f_map_2 (x^2) or f_map_5 (a piecewise quadratic), filters the mapped image with a 2D Teager-like quadratic Volterra operator (formula (53) from the referenced work), and then composes the final result as a base plus a scaled high-frequency component. The output should emphasize bright and dark regions according to the chosen mapping, not just thicken edges. However, the result looked nearly identical to the original, with minimal intensity-dependent enhancement.
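In symbols (notation mine, not the paper's): with x the normalized image, f the chosen mapping, T the Teager-like operator, and k the gain, the intended output is y = clip(f(x) + k*T(f(x)), 0, 1). As a quick standalone sanity check of one ingredient, the piecewise quadratic mapping (map5 in the snippets below) is continuous, strictly increasing, and fixes 0, 0.5, and 1:

```python
import numpy as np

# Standalone check of the piecewise quadratic mapping (f_map_5 / 'map5'):
# 2*x^2 below 0.5, and 1 - 2*(1 - x)^2 above it.
def f5(x):
    x = np.asarray(x, dtype=np.float64)
    return np.where(x > 0.5, 1 - 2 * (1 - x) ** 2, 2 * x ** 2)

xs = np.linspace(0, 1, 1001)
print(f5([0.0, 0.5, 1.0]))             # fixed points 0, 0.5, 1
print(bool(np.all(np.diff(f5(xs)) > 0)))  # strictly increasing: True
```

The fixed points matter because they guarantee the mapping reshapes mid-tones without shifting black, white, or the midpoint.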

Code that reproduces the issue

The following snippet shows the pipeline where the composition step accidentally uses the normalized image instead of the mapped one. The logic is intact, but this single line prevents the mapping from influencing the final image.

import cv2
import numpy as np

def to_unit_range(img):
    return img.astype(np.float32) / 255.0

def to_byte(img):
    return (img * 255).round().clip(0, 255).astype(np.uint8)

def map_input(arr, map_kind='none'):
    if map_kind == 'none':
        return arr
    elif map_kind == 'map2':
        return arr ** 2
    elif map_kind == 'map5':
        out = np.zeros_like(arr)
        m = arr > 0.5
        out[m]  = 1 - 2 * (1 - arr[m]) ** 2
        out[~m] = 2 * (arr[~m] ** 2)
        return out
    else:
        raise ValueError("bad map")

def teager2d(src):
    """2D Teager-like quadratic Volterra operator (formula (53))."""
    pad = np.pad(src, 1, mode='reflect')
    dst = np.zeros_like(src)
    for i in range(1, pad.shape[0] - 1):
        for j in range(1, pad.shape[1] - 1):
            c  = pad[i, j]
            t1 = 3 * (c ** 2)
            t2 = -0.5 * pad[i + 1, j + 1] * pad[i - 1, j - 1]
            t3 = -0.5 * pad[i + 1, j - 1] * pad[i - 1, j + 1]
            t4 = -1.0 * pad[i + 1, j] * pad[i - 1, j]
            t5 = -1.0 * pad[i, j + 1] * pad[i, j - 1]
            dst[i - 1, j - 1] = t1 + t2 + t3 + t4 + t5
    return dst

def run_enhance(path, gain_k, map_kind='none'):
    im = cv2.imread(path, 0)
    if im is None:
        raise FileNotFoundError(f"Could not read image: {path}")
    x_unit   = to_unit_range(im)
    remapped = map_input(x_unit, map_kind)
    tq_out   = teager2d(remapped)

    # Problem: composing with x_unit drops the mapping effect
    result_unit = np.clip(x_unit + gain_k * tq_out, 0, 1)
    return to_byte(result_unit)

Why this breaks the enhancement

The core of the method is the nonlinear mapping. The mapped image is your f(x), while the normalized image is just x. If you combine x with the high-frequency component, you essentially ignore the mapping in the final result. That is exactly why the output looks like a mild edge emphasis with almost no intensity-selective boost.

The following insight captures the behavior of the filter versus the mapped base:

Note that the Teager filter only enhances the high-frequency components of your image. It makes comparatively little difference to the Teager output whether you pass it the mapped image or the normalized image. Thus, when composing the low-pass and high-pass parts, you have to use the mapped image in order to preserve the applied mapping.

In other words, the Teager-like operator is doing the high-frequency work. The low-frequency base that you add it to must be the mapped signal; otherwise the mapping never reaches the viewer.
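To see this concretely, here is a small standalone experiment (the ramp image and the gain value 0.1 are illustrative choices; teager2d is copied from the snippets in this post). On smooth content the Teager term is tiny, so the buggy composition returns essentially x, while the corrected one returns essentially f(x):

```python
import numpy as np

# Standalone copy of the Teager-like filter from the snippets in this post.
def teager2d(src):
    pad = np.pad(src, 1, mode='reflect')
    dst = np.zeros_like(src)
    for i in range(1, pad.shape[0] - 1):
        for j in range(1, pad.shape[1] - 1):
            c = pad[i, j]
            dst[i - 1, j - 1] = (3 * c ** 2
                                 - 0.5 * pad[i + 1, j + 1] * pad[i - 1, j - 1]
                                 - 0.5 * pad[i + 1, j - 1] * pad[i - 1, j + 1]
                                 - pad[i + 1, j] * pad[i - 1, j]
                                 - pad[i, j + 1] * pad[i, j - 1])
    return dst

# Smooth horizontal ramp in [0.2, 0.8]; 'map2' mapping f(x) = x^2.
x = np.tile(np.linspace(0.2, 0.8, 64), (64, 1))
fx = x ** 2
hf = 0.1 * teager2d(fx)  # gain_k * T[f(x)]: tiny on smooth content

buggy = np.clip(x + hf, 0, 1)   # base is x: the mapping never shows up
fixed = np.clip(fx + hf, 0, 1)  # base is f(x): the mapping survives

print(np.abs(buggy - x).max())      # small: buggy output is essentially x
print(np.abs(fixed - buggy).max())  # ~0.25: the entire mapping was dropped
```

The two outputs differ by almost exactly f(x) - x, which is the whole intensity-shaping effect the buggy composition throws away.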

Fix and corrected code

The repair is one line: compose using the mapped image as the base. It also helps to keep file I/O separate from processing to make testing easier.

import cv2
import numpy as np

def to_unit_range(img):
    return img.astype(np.float32) / 255.0

def to_byte(img):
    return (img * 255).round().clip(0, 255).astype(np.uint8)

def map_input(arr, map_kind='none'):
    if map_kind == 'none':
        return arr
    elif map_kind == 'map2':
        return arr ** 2
    elif map_kind == 'map5':
        out = np.zeros_like(arr)
        m = arr > 0.5
        out[m]  = 1 - 2 * (1 - arr[m]) ** 2
        out[~m] = 2 * (arr[~m] ** 2)
        return out
    else:
        raise ValueError("bad map")

def teager2d(src):
    """2D Teager-like quadratic Volterra operator (formula (53))."""
    pad = np.pad(src, 1, mode='reflect')
    dst = np.zeros_like(src)
    for i in range(1, pad.shape[0] - 1):
        for j in range(1, pad.shape[1] - 1):
            c  = pad[i, j]
            t1 = 3 * (c ** 2)
            t2 = -0.5 * pad[i + 1, j + 1] * pad[i - 1, j - 1]
            t3 = -0.5 * pad[i + 1, j - 1] * pad[i - 1, j + 1]
            t4 = -1.0 * pad[i + 1, j] * pad[i - 1, j]
            t5 = -1.0 * pad[i, j + 1] * pad[i, j - 1]
            dst[i - 1, j - 1] = t1 + t2 + t3 + t4 + t5
    return dst

def enhance_from_array(img_gray, gain_k, map_kind='none'):
    x_unit   = to_unit_range(img_gray)
    remapped = map_input(x_unit, map_kind)
    tq_out   = teager2d(remapped)

    # Correct: compose using the mapped base
    result_unit = np.clip(remapped + gain_k * tq_out, 0, 1)
    return to_byte(result_unit)

# Example usage
# img = cv2.imread("path/to/image.png", 0)
# out = enhance_from_array(img, gain_k=0.1, map_kind='map5')
# cv2.imwrite("out.png", out)
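As an optional refactor (not part of the original pipeline), the double loop in teager2d can be vectorized with NumPy array slicing. This sketch computes the same values, including on the reflect-padded border:

```python
import numpy as np

def teager2d_vec(src):
    """Vectorized Teager-like filter; same values as the looped teager2d."""
    p = np.pad(np.asarray(src, dtype=np.float64), 1, mode='reflect')
    c = p[1:-1, 1:-1]
    return (3.0 * c ** 2
            - 0.5 * p[2:, 2:]   * p[:-2, :-2]    # main-diagonal neighbor pair
            - 0.5 * p[2:, :-2]  * p[:-2, 2:]     # anti-diagonal neighbor pair
            - 1.0 * p[2:, 1:-1] * p[:-2, 1:-1]   # vertical neighbor pair
            - 1.0 * p[1:-1, 2:] * p[1:-1, :-2])  # horizontal neighbor pair
```

Each slice of the padded array lines up one neighbor position against the whole image at once, replacing millions of Python-level loop iterations on large images.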

Why it is important to get this right

Mapping is the lever that makes the enhancement content-aware. It biases the base layer toward bright or dark regions before the high-frequency boost is added. If you drop the mapped base during composition, you effectively neutralize that lever, and the result turns into a generic edge emphasis. That mismatch explains why the outputs failed to replicate the expected intensity-dependent sharpening.

Conclusion

When implementing quadratic Volterra filters with nonlinear input mappings, verify which signal you are using as the base during composition. The Teager-like component primarily contributes high frequencies; the mapped image must carry the intended intensity shaping into the final result. Keep file I/O outside processing functions to simplify testing, and confirm the behavior by inspecting the mapped base and the Teager output separately before blending. With the corrected composition, the enhancement behaves as intended.
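For the per-stage inspection suggested above, a minimal harness might look like this (the helper name and report format are mine; the Teager step is a compact slicing equivalent of the looped filter):

```python
import numpy as np

def stage_report(x_unit, map_kind='map2', gain_k=0.1):
    """Print summary stats of each stage so the mapped base and the
    Teager output can be checked separately before blending."""
    fx = x_unit ** 2 if map_kind == 'map2' else x_unit
    p = np.pad(fx, 1, mode='reflect')
    c = p[1:-1, 1:-1]
    tq = (3 * c ** 2
          - 0.5 * p[2:, 2:] * p[:-2, :-2]
          - 0.5 * p[2:, :-2] * p[:-2, 2:]
          - p[2:, 1:-1] * p[:-2, 1:-1]
          - p[1:-1, 2:] * p[1:-1, :-2])
    for name, a in (('x', x_unit), ('f(x)', fx), ('T[f(x)]', tq)):
        print(f"{name:8s} min={a.min():+.4f} max={a.max():+.4f} mean={a.mean():+.4f}")
    return np.clip(fx + gain_k * tq, 0, 1)

# A synthetic ramp stands in for a real grayscale image here.
out = stage_report(np.tile(np.linspace(0.0, 1.0, 32), (32, 1)))
```

If the T[f(x)] row shows values comparable in magnitude to f(x), the gain is too high; if f(x) matches x despite a non-trivial mapping, the mapping step is being skipped somewhere.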