Include/tupleobject.h: tuple object header

CPython exposes tuple objects through two headers. Include/tupleobject.h holds the public stable API. Include/cpython/tupleobject.h adds the internal struct layout and the unsafe indexing macros that the interpreter relies on when executing BUILD_TUPLE.

Map

| Lines | Symbol | Kind | Notes |
|---|---|---|---|
| 1-10 | PyTuple_Type | extern | type object singleton |
| 11-15 | PyTuple_Check / PyTuple_CheckExact | macro | type-check helpers |
| 18-22 | PyTuple_New | function | allocates with pre-zeroed ob_item |
| 23-27 | PyTuple_Size | function | safe size query, checks type |
| 28-35 | PyTuple_GetItem | function | bounds-checked item fetch |
| 36-42 | PyTuple_SetItem | function | steals ref, only valid before publish |
| 44-50 | PyTuple_GetSlice | function | returns new tuple sub-range |
| 55-62 | PyTuple_GET_ITEM | unsafe macro | direct ob_item index, no bounds check |
| 63-68 | PyTuple_SET_ITEM | unsafe macro | direct ob_item write, no refcount |
| 69-74 | PyTuple_GET_SIZE | unsafe macro | reads ob_size without type check |
| 75-80 | _PyTuple_MaybeUntrack | internal | GC untrack if all items are untracked |

Reading

Forward declaration and struct layout

Include/cpython/tupleobject.h exposes the concrete struct:

```c
typedef struct {
    PyObject_VAR_HEAD
    PyObject *ob_item[1];
} PyTupleObject;
```

ob_item is a flexible array allocated inline with the object. PyTuple_New(n) allocates via PyObject_GC_NewVar with size n, so the header and all n item slots live in a single allocation. Immutability is what makes this layout workable: growing a tuple would require reallocating the whole object, invalidating every pointer already handed out to it.

Unsafe macros (cpython/tupleobject.h)

```c
#define PyTuple_GET_ITEM(op, i) (((PyTupleObject *)(op))->ob_item[i])
#define PyTuple_SET_ITEM(op, i, v) (((PyTupleObject *)(op))->ob_item[i] = (v))
#define PyTuple_GET_SIZE(op) (assert(PyTuple_Check(op)), Py_SIZE(op))
```

These macros exist for performance in hot paths. The interpreter fills slots with PyTuple_SET_ITEM when executing BUILD_TUPLE because at that point ownership is unambiguous: the tuple is brand-new and not yet visible to any other object. Calling PyTuple_SetItem (the safe version) would repeat work that is pointless at construction time, including a Py_XDECREF of the old slot, which is always NULL in a freshly allocated tuple.

BUILD_TUPLE and pre-allocation

The BUILD_TUPLE(count) bytecode calls PyTuple_New(count), then fills slots from the highest index down as it pops the value stack, so the stack top lands in the last slot. No reference-count manipulation is needed during the fill because the stack pops transfer ownership directly into ob_item. The tuple is only GC-tracked after all slots are written, avoiding a window where the GC could see partially initialized items.

gopy notes

  • objects/tuple.go mirrors the inline-allocation strategy using a Go slice backing the item array. TupleNew pre-allocates the slice to the exact count so appends never reallocate.
  • PyTuple_GET_ITEM / PyTuple_SET_ITEM map to direct slice index operations in generated bytecode handlers. No bounds check is suppressed; Go's bounds checker covers the gap that CPython leaves to the caller.
  • _PyTuple_MaybeUntrack has no equivalent yet. GC integration is deferred to a later milestone.