This parameter is the multixact equivalent of autovacuum_freeze_max_age. The mechanism is parallel; the underlying object being protected is not the transaction ID space but the MultiXact ID space, which most PostgreSQL users have never had to think about and the rest learned about during an outage. So before the parameter, the multixact tour.
A short tour of multixacts
PostgreSQL records row-level locks in the tuple itself. The xmax field, which usually holds the xid of the deleting transaction, also holds the xid of a transaction that has merely locked the row — SELECT FOR UPDATE, SELECT FOR SHARE, SELECT FOR KEY SHARE, or the implicit FOR KEY SHARE taken on behalf of a foreign key check. A flag bit on the tuple distinguishes “this row is deleted” from “this row is locked.”
That works fine when one transaction holds one lock. It does not work when several transactions hold concurrent locks on the same row, because there is only one xmax slot. Enter the multixact system.
A MultiXact ID is a 32-bit identifier that represents a set of transactions that have collectively acquired some interest in a row. When the second transaction wants to lock a row already locked by the first, PostgreSQL allocates a new MultiXactID, stores the membership list — pairs of (xid, lock mode) — in two SLRU caches under pg_multixact/, and writes the MultiXactID into the row’s xmax in place of any single xid. To check whether a row is visible or lockable, PostgreSQL now expands the MultiXactID back into its member set and consults each one.
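The expansion can be watched from psql. A minimal sketch, assuming a scratch table named t (the name is illustrative): two sessions take FOR SHARE locks on the same row, and the row's xmax becomes a MultiXactID rather than either session's xid.

```sql
-- Scratch setup (hypothetical table).
CREATE TABLE t (id int PRIMARY KEY, v text);
INSERT INTO t VALUES (1, 'hot row');

-- Session A:
BEGIN;
SELECT v FROM t WHERE id = 1 FOR SHARE;  -- xmax = A's xid, lock flag set

-- Session B, concurrently:
BEGIN;
SELECT v FROM t WHERE id = 1 FOR SHARE;  -- second locker: a MultiXactID is
                                         -- allocated and written into xmax

-- Either session, while both transactions are open:
SELECT xmax FROM t WHERE id = 1;         -- now shows the MultiXactID
```

Note that the xmax column is raw; telling a MultiXactID apart from a plain xid definitively requires looking at the tuple infomask bits, for instance with the pageinspect extension.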
When are multixacts created? More often than people realize:
- Foreign keys. Every row insert/update in a child table takes a FOR KEY SHARE lock on the referenced parent row. If two child-table transactions touch rows pointing at the same parent row concurrently — which happens constantly in OLTP workloads — that parent row gets a MultiXactID.
- Mixed FOR UPDATE/FOR SHARE access to the same rows.
- Row-level lock-heavy patterns — explicit pessimistic locking, queue tables, anything that fans out concurrent access to a small hot set of rows.
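One way to observe these multi-locker rows in the wild is the contrib extension pgrowlocks, which reports, per locked row, whether the locker is a plain xid or a multixact. A sketch (the table name is illustrative):

```sql
CREATE EXTENSION IF NOT EXISTS pgrowlocks;

-- While concurrent transactions hold row locks on parent_table:
SELECT locked_row, multi, xids, modes
FROM pgrowlocks('parent_table');
-- multi = true means xmax holds a MultiXactID; xids lists the member
-- transactions and modes the lock mode each member holds.
```

Running this against a busy parent table of a foreign key during peak traffic is a quick way to confirm whether FK-driven multixact creation is happening at the rate you suspect.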
The two SLRUs (the pg_multixact/offsets and pg_multixact/members directories) form the on-disk storage. The buffers in front of them are shared-memory caches whose sizes used to be hard-coded and are, as of PostgreSQL 17, configurable via multixact_offset_buffers and multixact_member_buffers. On high-throughput OLTP systems with heavy FK traffic, contention on these caches — MultiXactOffsetSLRU and MultiXactMemberSLRU waits — is a real production failure mode. I have personally diagnosed an outage with that exact root cause. It is not a theoretical concern.
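On PG 13 and later you can quantify cache pressure on these SLRUs through the pg_stat_slru view before reaching for the PG 17 buffer GUCs; a poor hit ratio on the multixact rows is the signal. A sketch:

```sql
-- Hit ratio for the two multixact SLRU caches.
SELECT name, blks_hit, blks_read,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4)
         AS hit_ratio
FROM pg_stat_slru
WHERE name IN ('MultiXactOffset', 'MultiXactMember');
```

A hit ratio well below 1.0 here, combined with the corresponding wait events in pg_stat_activity, is the pattern to look for.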
MultiXact IDs, like transaction IDs, are 32-bit and wrap. They need to be frozen for the same reason. Vacuum’s job includes freezing tuples whose xmax is an old MultiXactID — replacing it with a simpler representation if all the member transactions are long since done, or removing the lock information entirely.
What this parameter does
autovacuum_multixact_freeze_max_age is the maximum age, in MultiXact IDs, that any table’s oldest unfrozen MultiXactID is allowed to reach before autovacuum is forced to launch an anti-wraparound vacuum against it. Default is 400,000,000 — twice the default for the xid version. Min 10,000; max 2,000,000,000; context postmaster, so a restart is required to change it.
The 2x default reflects the assumption that multixact consumption is much lower than xid consumption — most workloads create xids constantly and multixacts only when concurrent locking actually happens. For workloads with heavy foreign-key traffic and hot parent rows, that assumption fails badly, and multixact wraparound can become the urgent problem instead of xid wraparound. Anti-wraparound vacuums triggered by this parameter are the same beast as their xid cousins: cannot be cancelled in the usual way, run even if autovacuum = off, log entries say “to prevent wraparound.”
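The global setting can be inspected with SHOW, and the threshold can also be lowered for an individual table via the storage parameter of the same name; autovacuum ignores per-table values larger than the system-wide setting, so this is a one-way lever downward. A sketch (the table name is illustrative):

```sql
SHOW autovacuum_multixact_freeze_max_age;   -- 400000000 by default

-- Freeze a known-hot parent table earlier than the rest of the cluster;
-- per-table values above the global setting are ignored.
ALTER TABLE parent_table
  SET (autovacuum_multixact_freeze_max_age = 200000000);
```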
Tuning
For most workloads, leave it alone. For workloads where you know multixact pressure is real:
- Lower it (say, to 200M) to encourage earlier, more frequent freezing and reduce the chance of an emergency anti-wraparound vacuum colliding with peak traffic.
- Monitor mxid_age(datminmxid) per database and mxid_age(relminmxid) per table the same way you monitor age(datfrozenxid) and age(relfrozenxid) — note that multixact ages need mxid_age(), not age(). The system catalogs expose both. Alert when either crosses 50% of its respective max-age parameter.
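A monitoring query along these lines, assuming default settings (adjust the threshold arithmetic to your own configuration):

```sql
-- Per-database multixact age as a fraction of the forced-vacuum threshold.
SELECT datname,
       mxid_age(datminmxid) AS mxid_age,
       round(100.0 * mxid_age(datminmxid)
             / current_setting('autovacuum_multixact_freeze_max_age')::numeric,
             1) AS pct_to_forced_vacuum
FROM pg_database
ORDER BY mxid_age DESC;

-- Per-table, worst offenders first (relations without storage carry a
-- zero relminmxid and are excluded).
SELECT relname, mxid_age(relminmxid) AS mxid_age
FROM pg_class
WHERE relkind IN ('r', 't', 'm') AND relminmxid <> '0'
ORDER BY mxid_age DESC
LIMIT 20;
```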
If you are seeing genuine multixact pressure, the fix is rarely just this GUC. Investigate FK-driven hot rows; look at pg_stat_activity wait events for MultiXactOffsetSLRU and MultiXactMemberSLRU; on PG 17+, raise the SLRU buffer GUCs. Tuning autovacuum_multixact_freeze_max_age shortens the window before an anti-wraparound vacuum runs, but it does not reduce the rate at which your workload produces multixacts.
Recommendation: Leave at the default unless your workload is FK-heavy and OLTP, in which case lower it to 200M and monitor mxid_age(relminmxid) per table actively. The general advice is the same as for autovacuum_freeze_max_age: this parameter is a safety net, and your job is to never need it.