
pycore_tuple.h: internal tuple layout

pycore_tuple.h exposes the internal layout of immutable tuples. Unlike lists, which keep their items in a separately allocated, resizable buffer, tuples use a flexible array member: the items live immediately after the header in a single allocation. The file also declares the per-size free lists and the _PyTuple_MaybeUntrack helper used by the GC.

Map

Lines   Symbol                 Kind      Purpose
1–25    PyTupleObject          struct    tuple header with ob_item flexible array
26–45   _Py_tuple_freelist     struct    per-interpreter free lists (one chain per size 1–20)
46–60   PyTuple_MAXSAVESIZE    macro     max size tracked by free lists (20)
61–70   PyTuple_MAXFREELIST    macro     max depth per size chain (8)
71–80   _PyTuple_MaybeUntrack  function  remove tuple from GC tracking if all items are untracked

Reading

PyTupleObject struct

Tuples are immutable so their length is fixed at creation and the item array is part of the same heap block.

typedef struct {
    PyObject_VAR_HEAD     /* ob_refcnt, ob_type, ob_size */
    PyObject *ob_item[1]; /* flexible array; actual length is ob_size */
} PyTupleObject;

Because the items are inline, PyTuple_GET_ITEM(op, i) is a single pointer dereference with no indirection through a separate buffer pointer.
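The single-block layout can be illustrated with a small stand-in struct. MiniTuple and mini_tuple_new are hypothetical names for this sketch, not CPython API: a size field takes the place of PyObject_VAR_HEAD, and one malloc covers the header plus all item slots, so indexing ob_item touches the same allocation as the header.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical stand-in for PyTupleObject: a size field in place of
 * PyObject_VAR_HEAD, followed by a C89-style flexible array of one slot. */
typedef struct {
    size_t ob_size;
    void  *ob_item[1];   /* actual length is ob_size */
} MiniTuple;

/* One allocation covers the header and all item slots, mirroring the
 * single-block layout described above. */
static MiniTuple *mini_tuple_new(size_t n) {
    size_t extra = (n > 1 ? n - 1 : 0) * sizeof(void *);
    MiniTuple *t = malloc(sizeof(MiniTuple) + extra);
    t->ob_size = n;
    return t;
}
```

With this layout, item access is plain array indexing into the header's own block; the first item sits at a fixed offset from the object pointer, which is why no second pointer chase is needed.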

Per-size free lists

CPython maintains one singly-linked free list per non-zero tuple size from 1 through PyTuple_MAXSAVESIZE (20). Each chain holds at most PyTuple_MAXFREELIST (8) recycled objects. The empty tuple (size 0) is a singleton stored separately in _Py_INTERP_CACHED_OBJECT(empty_tuple).

struct _Py_tuple_freelist {
    /* One chain per size 1..PyTuple_MAXSAVESIZE */
    PyTupleObject *items[PyTuple_MAXSAVESIZE];
    int numfree[PyTuple_MAXSAVESIZE];
};

On deallocation, if ob_size is in range and the chain is not full, the tuple header is pushed onto the chain. The next allocation of the same size pops it and reinitializes ob_item in place.
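The push/pop mechanics can be sketched with the same hypothetical MiniTuple stand-in (MINI_MAXSAVESIZE, mini_tuple_release, and mini_tuple_alloc are illustration names, not CPython identifiers). As in CPython, the freed object's ob_item[0] slot is reused as the "next" link of the chain, so the chain costs no extra memory.

```c
#include <stdlib.h>

#define MINI_MAXSAVESIZE 20   /* stand-in for PyTuple_MAXSAVESIZE */
#define MINI_MAXFREELIST  8   /* stand-in for PyTuple_MAXFREELIST */

/* Hypothetical mirror of PyTupleObject, for illustration only. */
typedef struct {
    size_t ob_size;
    void  *ob_item[1];
} MiniTuple;

static MiniTuple *free_list[MINI_MAXSAVESIZE];
static int        numfree[MINI_MAXSAVESIZE];

/* Deallocation path: push onto the chain for this size if there is room.
 * ob_item[0] is reused as the chain's "next" pointer. Returns 1 if cached. */
static int mini_tuple_release(MiniTuple *t) {
    size_t n = t->ob_size;
    if (n == 0 || n > MINI_MAXSAVESIZE || numfree[n - 1] >= MINI_MAXFREELIST) {
        free(t);
        return 0;
    }
    t->ob_item[0] = free_list[n - 1];
    free_list[n - 1] = t;
    numfree[n - 1]++;
    return 1;
}

/* Allocation path for n >= 1 (the empty tuple is a singleton handled
 * elsewhere): pop a recycled header of the right size, else malloc. */
static MiniTuple *mini_tuple_alloc(size_t n) {
    if (n >= 1 && n <= MINI_MAXSAVESIZE && free_list[n - 1] != NULL) {
        MiniTuple *t = free_list[n - 1];
        free_list[n - 1] = t->ob_item[0];
        numfree[n - 1]--;
        return t;
    }
    MiniTuple *t = malloc(sizeof(MiniTuple) + (n - 1) * sizeof(void *));
    t->ob_size = n;
    return t;
}
```

Releasing a size-2 tuple and then allocating another size-2 tuple hands back the same block, which is the whole point of the per-size chains: same-size reuse skips both free and malloc.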

_PyTuple_MaybeUntrack

After filling a tuple's items the runtime calls this to remove the tuple from the cyclic GC worklist when none of the items can participate in a cycle (for example, when all items are ints or strings). This avoids the GC visiting a large fraction of all live tuples on each collection.

static inline void
_PyTuple_MaybeUntrack(PyObject *op)
{
    PyTupleObject *t = (PyTupleObject *)op;
    for (Py_ssize_t i = 0; i < Py_SIZE(t); i++) {
        if (_PyObject_GC_MAY_BE_TRACKED(t->ob_item[i])) {
            return;
        }
    }
    _PyObject_GC_UNTRACK(op);
}
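The scan's early-exit behavior can be shown in isolation. Here MiniItem and mini_tuple_can_untrack are hypothetical stand-ins: an is_container flag plays the role of _PyObject_GC_MAY_BE_TRACKED, since only container-like objects can participate in a reference cycle.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical item model: only "container" items may sit on a cycle,
 * standing in for _PyObject_GC_MAY_BE_TRACKED. */
typedef struct {
    bool is_container;
} MiniItem;

/* Mirrors the loop above: bail out at the first item that may be tracked;
 * otherwise the tuple is safe to untrack. */
static bool mini_tuple_can_untrack(const MiniItem *items, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (items[i].is_container) {
            return false;
        }
    }
    return true;
}
```

A tuple of only ints or strings passes the scan and is untracked; a single list anywhere in the items keeps the whole tuple on the GC's lists.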

gopy notes

objects/tuple.go stores items as a Go slice ([]Object) rather than an inline C array, so the single-allocation property does not hold. The free-list recycling is omitted since Go's GC handles short-lived tuples efficiently. _PyTuple_MaybeUntrack has no direct equivalent because gopy does not implement a tracing GC. The empty-tuple singleton is reproduced as a package-level emptyTuple variable initialized at startup.