2026, Jan 08 23:00

How to Generate Every Unique Poker Flop (52 choose 3) Efficiently in Python with itertools.combinations and Pandas

Generate all 52 choose 3 unique poker flops fast in Python—use index slices or itertools.combinations, avoid sorting and dict dedup; build a Pandas DataFrame if needed.

Generating every unique poker flop as a DataFrame sounds trivial until performance matters. A brute-force approach quickly becomes expensive if it repeatedly checks for duplicates or sorts combinations. The target is clear: produce exactly 52 choose 3 = 22100 unordered, unique triplets of cards, as fast and as cleanly as possible.

Baseline approach that works but overdoes the work

The following version builds a deck, constructs flops with nested loops, enforces uniqueness by sorting each triplet and using a dictionary key, and finally splits the concatenated key back into three columns. It works, but it spends cycles on checks, sorting and recomposing strings that can be avoided.

import pandas as pd

deck = []
for r in ["1", "2", "3", "4", "5", "6", "7", "8", "9", "T", "J", "Q", "K"]:
    deck.append(r + "h")
    deck.append(r + "c")
    deck.append(r + "s")
    deck.append(r + "d")

flop_map = {}
for idx, first in enumerate(deck):
    for second in deck[idx + 1:]:
        if not second == first:
            for third in deck[idx + 2:]:
                if not third == second:
                    hand_buf = sorted([first, second, third])
                    flop_map[hand_buf[0] + hand_buf[1] + hand_buf[2]] = 0

col_a, col_b, col_c = [], [], []
for key, _ in flop_map.items():
    col_a.append(key[0:2])
    col_b.append(key[2:4])
    col_c.append(key[4:6])

flop_df = pd.DataFrame({"a": col_a, "b": col_b, "c": col_c})

What actually slows this down

The iteration already prevents using the same card twice when the indices are sliced correctly, which makes extra equality checks such as if not second == first redundant: the slice has already excluded earlier elements. Sorting every triplet also hurts, because the ordering can be guaranteed by how the indices advance. The baseline needs the sort and the dict only because its third loop restarts at deck[idx + 2:] instead of after the second card, so the same three cards come out in several orders. If the loops only move forward through a deck kept in one fixed order, the output arrives in a consistent order and there is no need to sort at all. Using a dict keyed by concatenated strings to deduplicate, and then splitting those strings back into columns, adds overhead for no benefit once the loops emit each combination exactly once. Building a set to test uniqueness is useful for validation only; it is not needed in the generation path. And since the cards are strings already, wrapping elements with str() does nothing.

Last but not least, constructing the DataFrame itself costs some time. Measurements show that pure Python tuple generation is faster than building the DataFrame, and replacing the manual loops with the right iterator gives another win.
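To make "overdoes the work" measurable, a rough timeit sketch can compare a compact version of the sort-and-dict baseline against plain forward-only tuple generation. The function names are illustrative, and absolute numbers vary by machine and Python version:

```python
import timeit

ranks = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "T", "J", "Q", "K"]
suits = ["h", "c", "s", "d"]
deck = [r + s for r in ranks for s in suits]

def baseline():
    # Sort each triplet and deduplicate through a dict, as in the slow version.
    flop_map = {}
    for idx, first in enumerate(deck):
        for second in deck[idx + 1:]:
            for third in deck[idx + 2:]:
                if third != second:
                    h = sorted([first, second, third])
                    flop_map[h[0] + h[1] + h[2]] = 0
    return flop_map

def forward_only():
    # Forward-only index slices: each triplet is emitted exactly once.
    out = []
    for i, a in enumerate(deck):
        for j, b in enumerate(deck[i + 1:], i + 1):
            for c in deck[j + 1:]:
                out.append((a, b, c))
    return out

# Both yield exactly 52 choose 3 = 22100 unique flops.
assert len(baseline()) == len(forward_only()) == 22100

print("baseline:    ", timeit.timeit(baseline, number=20))
print("forward only:", timeit.timeit(forward_only, number=20))
```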

Faster generation with index slices

This version relies on index ranges to emit only strictly increasing triplets. That guarantees uniqueness and a consistent order without sort(), without a dict, and without extra equality checks.

import pandas as pd

ranks = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "T", "J", "Q", "K"]
suits = ["h", "c", "s", "d"]

deck = [r + s for r in ranks for s in suits]

first_col, second_col, third_col = [], [], []
for i, a in enumerate(deck):
    for j, b in enumerate(deck[i + 1:], i + 1):
        for c in deck[j + 1:]:
            first_col.append(a)
            second_col.append(b)
            third_col.append(c)

flop_table = pd.DataFrame({"a": first_col, "b": second_col, "c": third_col})

Keeping the deck in one fixed order yields combinations in that same consistent order. The inner loops only advance forward, so the same three cards can never appear in a different order, which removes any need for sort(), a dict, or a set to deduplicate.
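If you want assurance that the slicing really emits each flop exactly once, a one-off check with a set and math.comb (available since Python 3.8) is enough. This is a sketch of a validation step that stays outside the generation path:

```python
import math

ranks = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "T", "J", "Q", "K"]
suits = ["h", "c", "s", "d"]
deck = [r + s for r in ranks for s in suits]

flops = []
for i, a in enumerate(deck):
    for j, b in enumerate(deck[i + 1:], i + 1):
        for c in deck[j + 1:]:
            flops.append((a, b, c))

# Validation only: the set is a test aid, not part of the generation.
unique = set(flops)
assert len(flops) == len(unique) == math.comb(52, 3) == 22100
```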

Even shorter and a little faster: itertools.combinations

The standard library already provides exactly the iterator needed here. It emits unique, unordered 3-combinations directly, which can be turned into a DataFrame with minimal code. Reported timings show that this approach is a little faster than the manual nested loops, and building the combinations without a DataFrame is faster still.

import pandas as pd
import itertools

ranks = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "T", "J", "Q", "K"]
suits = ["h", "c", "s", "d"]

deck = [r + s for r in ranks for s in suits]

triples = list(itertools.combinations(deck, 3))
flop_frame = pd.DataFrame(triples, columns=["a", "b", "c"])

Empirically, this produces exactly 22100 rows and avoids all duplicate work. Without constructing the DataFrame, generating the combinations is significantly faster.
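The claim that the DataFrame step dominates is easy to check by timing the two stages separately with timeit. A sketch, with absolute numbers depending on the machine and pandas version:

```python
import itertools
import timeit

import pandas as pd

ranks = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "T", "J", "Q", "K"]
suits = ["h", "c", "s", "d"]
deck = [r + s for r in ranks for s in suits]

# Generation alone versus generation plus DataFrame construction.
gen_only = timeit.timeit(lambda: list(itertools.combinations(deck, 3)), number=100)
with_frame = timeit.timeit(
    lambda: pd.DataFrame(list(itertools.combinations(deck, 3)), columns=["a", "b", "c"]),
    number=100,
)
print(f"combinations only: {gen_only:.3f}s for 100 runs")
print(f"plus DataFrame:    {with_frame:.3f}s for 100 runs")
```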

Why this matters

Small inefficiencies compound when the search space grows. Repeated sorting, redundant equality checks, and ad hoc deduplication structures make a clear and simple task slower and harder to reason about. Using index ranges or itertools.combinations removes entire classes of work and guarantees uniqueness by construction. That makes the code easier to validate and to optimize further, for example, by isolating the DataFrame creation from the combination generation if you only need to measure the core computation.

Takeaways

When generating combinations, emit unique ordered triples directly instead of generating permutations and filtering them later. Keep input in a defined order so that iteration yields lexicographically consistent output and no sort() is required. Avoid constructing intermediary concatenated strings or dictionaries if you will split values back into columns anyway. Validate uniqueness separately if you need assurance, but keep test-only structures like a set out of the hot path. And if performance really matters, prefer itertools.combinations for clarity and speed, using a DataFrame only when you actually need tabular output.