63.4. Implementation
This section covers B-Tree index implementation details that may be
of use to advanced users. See
src/backend/access/nbtree/README
in the source
distribution for a much more detailed, internals-focused description
of the B-Tree implementation.
63.4.1. B-Tree Structure
PostgreSQL B-Tree indexes are multi-level tree structures, where each level of the tree can be used as a doubly-linked list of pages. A single metapage is stored in a fixed position at the start of the first segment file of the index. All other pages are either leaf pages or internal pages. Leaf pages are the pages on the lowest level of the tree. All other levels consist of internal pages. Each leaf page contains tuples that point to table rows. Each internal page contains tuples that point to the next level down in the tree. Typically, over 99% of all pages are leaf pages. Both internal pages and leaf pages use the standard page format described in Section 69.6.
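This structure can be examined directly with the pageinspect contrib module. The following sketch is illustrative only; the table and index names are hypothetical, and the exact set of output columns varies somewhat across PostgreSQL versions:

CREATE EXTENSION IF NOT EXISTS pageinspect;

CREATE TABLE demo (id integer);
INSERT INTO demo SELECT generate_series(1, 100000);
CREATE INDEX demo_id_idx ON demo (id);

-- The metapage records the block number of the current root page and the
-- number of levels above the leaf level (leaf pages are level 0).
SELECT root, level FROM bt_metap('demo_id_idx');

-- Any other page can be examined individually.  The output of
-- bt_page_stats includes the left and right sibling links (btpo_prev and
-- btpo_next) that form the doubly-linked list on each level of the tree.
SELECT * FROM bt_page_stats('demo_id_idx', 1);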
New leaf pages are added to a B-Tree index when an existing leaf page cannot fit an incoming tuple. A page split operation makes room for items that originally belonged on the overflowing page by moving a portion of the items to a new page. Page splits must also insert a new downlink to the new page in the parent page, which may cause the parent to split in turn. Page splits "cascade upwards" in a recursive fashion. When the root page finally cannot fit a new downlink, a root page split operation takes place. This adds a new level to the tree structure by creating a new root page that is one level above the original root page.
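The level reported in the metapage grows by one each time a root page split occurs. A rough way to observe this, again using pageinspect (a sketch with hypothetical object names; the exact level reached depends on tuple widths and page size):

CREATE EXTENSION IF NOT EXISTS pageinspect;

CREATE TABLE split_demo (id integer);
CREATE INDEX split_demo_idx ON split_demo (id);

INSERT INTO split_demo SELECT generate_series(1, 100);
-- With only a few entries the entire index fits on a single page: the root
-- is itself a leaf page, so there are no levels above the leaf level.
SELECT level FROM bt_metap('split_demo_idx');   -- expect 0

INSERT INTO split_demo SELECT generate_series(101, 1000000);
-- Leaf page splits have now cascaded upwards and the root page has split,
-- adding levels above the leaf level.
SELECT level FROM bt_metap('split_demo_idx');   -- typically 2 with this many entries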
63.4.2. Deduplication
A duplicate is a leaf page tuple (a tuple that points to a table row) where all indexed key columns have values that match corresponding column values from at least one other leaf page tuple in the same index. Duplicate tuples are quite common in practice. B-Tree indexes can use a special, space-efficient representation for duplicates when an optional technique is enabled: deduplication.
Deduplication works by periodically merging groups of duplicate tuples together, forming a single posting list tuple for each group. The column key value(s) only appear once in this representation. This is followed by a sorted array of TIDs that point to rows in the table. This significantly reduces the storage size of indexes where each value (or each distinct combination of column values) appears several times on average. The latency of queries can be reduced significantly. Overall query throughput may increase significantly. The overhead of routine index vacuuming may also be reduced significantly.
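The posting list representation can be seen with the pageinspect contrib module. In this sketch (hypothetical names; recent pageinspect versions report a posting list's heap TIDs in a dedicated column of bt_page_items), many duplicates of a small set of values are indexed and one leaf page is then dumped:

CREATE EXTENSION IF NOT EXISTS pageinspect;

CREATE TABLE posting_demo AS
    SELECT i % 10 AS val FROM generate_series(1, 10000) AS i;
CREATE INDEX posting_demo_idx ON posting_demo (val);

-- Each output row is one tuple on leaf page 1.  Tuples that were merged by
-- deduplication show the key value once, together with an array of heap
-- TIDs (the sorted posting list) rather than a single TID.
SELECT * FROM bt_page_items('posting_demo_idx', 1);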
Note
B-Tree deduplication is just as effective with "duplicates" that contain a NULL value, even though NULL values are never equal to each other according to the = member of any B-Tree operator class. As far as any part of the implementation that understands the on-disk B-Tree structure is concerned, NULL is just another value from the domain of indexed values.
The deduplication process occurs lazily, when a new item is inserted that cannot fit on an existing leaf page. This prevents (or at least delays) leaf page splits. Unlike GIN posting list tuples, B-Tree posting list tuples do not need to expand every time a new duplicate is inserted; they are merely an alternative physical representation of the original logical contents of the leaf page. This design prioritizes consistent performance with mixed read-write workloads. Most client applications will at least see a moderate performance benefit from using deduplication. Deduplication is enabled by default.
CREATE INDEX and REINDEX apply deduplication to create posting list tuples, though the strategy they use is slightly different. Each group of duplicate ordinary tuples encountered in the sorted input taken from the table is merged into a posting list tuple before being added to the current pending leaf page. Individual posting list tuples are packed with as many TIDs as possible. Leaf pages are written out in the usual way, without any separate deduplication pass. This strategy is well-suited to CREATE INDEX and REINDEX because they are once-off batch operations.
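The space savings from build-time deduplication are easy to measure by building the same index twice, once with the deduplicate_items storage parameter disabled. This is only a sketch with hypothetical names; the actual savings depend entirely on how many duplicates the data contains:

CREATE TABLE dedup_demo AS
    SELECT i % 100 AS val FROM generate_series(1, 1000000) AS i;

CREATE INDEX dedup_on_idx  ON dedup_demo (val);  -- deduplication is on by default
CREATE INDEX dedup_off_idx ON dedup_demo (val) WITH (deduplicate_items = off);

SELECT pg_size_pretty(pg_relation_size('dedup_on_idx'))  AS with_dedup,
       pg_size_pretty(pg_relation_size('dedup_off_idx')) AS without_dedup;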
Write-heavy workloads that don't benefit from deduplication due to having few or no duplicate values in indexes will incur a small, fixed performance penalty (unless deduplication is explicitly disabled). The deduplicate_items storage parameter can be used to disable deduplication within individual indexes. There is never any performance penalty with read-only workloads, since reading posting list tuples is at least as efficient as reading the standard tuple representation. Disabling deduplication isn't usually helpful.
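For an existing index, the storage parameter can be changed with ALTER INDEX; for example (hypothetical index name):

ALTER INDEX dedup_off_idx SET (deduplicate_items = on);
ALTER INDEX dedup_off_idx SET (deduplicate_items = off);

Changing the parameter only affects future insertions and deduplication passes; it does not rewrite the index, so posting list tuples that already exist are unaffected.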
B-Tree indexes are not directly aware that under MVCC, there might be multiple extant versions of the same logical table row; to an index, each tuple is an independent object that needs its own index entry. "Version duplicates" may sometimes accumulate and adversely affect query latency and throughput. This typically occurs with UPDATE-heavy workloads where most individual updates cannot apply the HOT optimization (often because at least one indexed column gets modified, necessitating a new set of index tuple versions - one new tuple for each and every index). In effect, B-Tree deduplication ameliorates index bloat caused by version churn. Note that even the tuples from a unique index are not necessarily physically unique when stored on disk due to version churn. The deduplication optimization is selectively applied within unique indexes. It targets those pages that appear to have version duplicates. The high level goal is to give VACUUM more time to run before an "unnecessary" page split caused by version churn can take place.
Tip
A special heuristic is applied to determine whether a deduplication pass in a unique index should take place. It can often skip straight to splitting a leaf page, avoiding a performance penalty from wasting cycles on unhelpful deduplication passes. If you're concerned about the overhead of deduplication, consider setting deduplicate_items = off selectively. Leaving deduplication enabled in unique indexes has little downside.
Deduplication cannot be used in all cases due to implementation-level restrictions. Deduplication safety is determined when CREATE INDEX or REINDEX is run.
Note that deduplication is deemed unsafe and cannot be used in the following cases involving semantically significant differences among equal datums:
- text, varchar, and char cannot use deduplication when a nondeterministic collation is used. Case and accent differences must be preserved among equal datums (see the example after these lists).
- numeric cannot use deduplication. Numeric display scale must be preserved among equal datums.
- jsonb cannot use deduplication, since the jsonb B-Tree operator class uses numeric internally.
- float4 and float8 cannot use deduplication. These types have distinct representations for -0 and 0, which are nevertheless considered equal. This difference must be preserved.
There is one further implementation-level restriction that may be lifted in a future version of PostgreSQL:
- Container types (such as composite types, arrays, or range types) cannot use deduplication.
There is one further implementation-level restriction that applies regardless of the operator class or collation used:
- INCLUDE indexes can never use deduplication.
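As an illustration of the first restriction above, an index on a column with a nondeterministic collation is built without deduplication. The sketch below uses hypothetical names and requires a server built with ICU support:

CREATE COLLATION case_insensitive (provider = icu, locale = 'und-u-ks-level2', deterministic = false);

CREATE TABLE ci_demo (name text COLLATE case_insensitive);

-- 'foo' and 'FOO' compare as equal under this collation, yet the case
-- difference must be preserved among equal datums, so CREATE INDEX
-- determines that deduplication cannot safely be used for this index.
CREATE INDEX ci_demo_idx ON ci_demo (name);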