Include/tupleobject.h: tuple object header
CPython exposes tuple objects through two headers. Include/tupleobject.h holds the public stable API. Include/cpython/tupleobject.h adds the internal struct layout and the unsafe indexing macros that the interpreter uses when executing BUILD_TUPLE.
Map
| Lines | Symbol | Kind | Notes |
|---|---|---|---|
| 1-10 | PyTuple_Type | extern | type object singleton |
| 11-15 | PyTuple_Check / PyTuple_CheckExact | macro | type-check helpers |
| 18-22 | PyTuple_New | function | allocates with pre-zeroed ob_item |
| 23-27 | PyTuple_Size | function | safe size query, checks type |
| 28-35 | PyTuple_GetItem | function | bounds-checked item fetch |
| 36-42 | PyTuple_SetItem | function | steals ref, only valid before publish |
| 44-50 | PyTuple_GetSlice | function | returns new tuple sub-range |
| 55-62 | PyTuple_GET_ITEM | unsafe macro | direct ob_item index, no bounds check |
| 63-68 | PyTuple_SET_ITEM | unsafe macro | direct ob_item write, no refcount |
| 69-74 | PyTuple_GET_SIZE | unsafe macro | reads ob_size without type check |
| 75-80 | _PyTuple_MaybeUntrack | internal | GC untrack if all items are untracked |
Reading
Forward declaration and struct layout
Include/cpython/tupleobject.h exposes the concrete struct:
```c
typedef struct {
    PyObject_VAR_HEAD
    PyObject *ob_item[1];
} PyTupleObject;
```
ob_item is a flexible array allocated inline with the object (declared as [1] for pre-C99 compatibility). PyTuple_New(n) calls PyObject_GC_NewVar with size n, so the header and the entire item array live in one allocation. This layout is one reason tuples are immutable: growing one would mean reallocating the whole object, invalidating every existing pointer to it.
Unsafe macros (cpython/tupleobject.h)
```c
#define PyTuple_GET_ITEM(op, i) (((PyTupleObject *)(op))->ob_item[i])
#define PyTuple_SET_ITEM(op, i, v) (((PyTupleObject *)(op))->ob_item[i] = (v))
#define PyTuple_GET_SIZE(op) (assert(PyTuple_Check(op)), Py_SIZE(op))
```
These macros exist for performance in hot paths. The interpreter's BUILD_TUPLE handler fills slots with PyTuple_SET_ITEM because at that point ownership is unambiguous: the tuple is brand-new and not yet visible to any other code. Calling PyTuple_SetItem (the safe version) would add a type check and a redundant Py_XDECREF of the old slot, which is always NULL at construction time.
BUILD_TUPLE and pre-allocation
The BUILD_TUPLE(count) bytecode calls PyTuple_New(count), then fills slots from the value stack using PyTuple_SET_ITEM, the top of the stack becoming the last item. No reference-count manipulation is needed during the fill because each stack pop transfers ownership directly into ob_item. The tuple is only GC-tracked after all slots are written, avoiding a window where the GC could see partially initialized items.
gopy notes
objects/tuple.go mirrors the inline-allocation strategy using a Go slice backing the item array. TupleNew pre-allocates the slice to the exact count so appends never reallocate. PyTuple_GET_ITEM / PyTuple_SET_ITEM map to direct slice index operations in generated bytecode handlers; no bounds check is suppressed, since Go's bounds checker covers the gap that CPython leaves to the caller. _PyTuple_MaybeUntrack has no equivalent yet; GC integration is deferred to a later milestone.