Latest Blog Posts

Two Hundred and Twelve Things
Posted by Christophe Pettus in pgExperts on 2026-05-01 at 15:00
PostgreSQL 19 is an admin-and-monitoring release with 212 items: worker-managed AIO, smarter planner joins, faster diagnostics, and a C11 requirement.

pgxbackup: Continuity Support for pgBackRest
Posted by Christophe Pettus in pgExperts on 2026-05-01 at 13:00
PGX is stepping in to maintain pgBackRest as pgxbackup, ensuring critical fixes and PostgreSQL compatibility for the industry-standard backup tool.

It Depends: Using Session Variables in Postgres
Posted by Shaun Thomas in pgEdge on 2026-05-01 at 05:36

There's been a persistent myth about Postgres since I first started using it seriously over 20 years ago: "Postgres doesn't support user variables." This hasn't really been true since version 8.0, way back in 2005. Part of the confusion stems from the fact that Postgres doesn't do things the same way as other common database engines. Why don't we spend a little time exploring the functionality that time forgot?

What Everyone Else Is Doing

Before I delve into the Postgres approach, let's take a look at the competition. If anyone wants to switch to Postgres (as they should), they'll bring along plenty of assumptions.

Let's start with MySQL, the formerly undisputed database king of the LAMP stack. MySQL session variables merely prefix any name with @ to assign a value. Simple, right? It's even possible to use them directly in queries. We don't have to get into the finer minutiae here, as the MySQL documentation on user-defined variables does that job splendidly. The point is that some users expect this level of compatibility and balk when it's missing.

When it comes to SQL Server, things are very similar to MySQL, though perhaps a bit more structured. Once again, the SQL Server documentation on variables is pretty clear about how these work. The primary caveat is that these variables are limited to the current batch, making them somewhat tedious to work with in some cases.

The picture for Oracle is a bit different. Oracle calls them substitution variables, and prefixes names with & rather than @. This is closer to a macro system than a true variable; the SQL*Plus or SQLcl clients substitute the values before sending statements to the server. It's not something other drivers or clients can use unless they added it themselves for compatibility purposes.
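The code samples from the original post did not survive syndication. As a rough sketch of the three styles (the items table and tax figure are mine, purely for illustration):

```sql
-- MySQL: assign with @name, then use it directly in queries
SET @tax := 0.0825;
SELECT price * (1 + @tax) AS total FROM items;

-- SQL Server: DECLARE @name, scoped to the current batch
DECLARE @tax numeric(6,4) = 0.0825;
SELECT price * (1 + @tax) AS total FROM items;

-- Oracle (SQL*Plus / SQLcl): &name is substituted client-side before execution
DEFINE tax = 0.0825
SELECT price * (1 + &tax) AS total FROM items;
```

Note that only the Oracle variant is a textual substitution; the MySQL and SQL Server variables live in server-side session state.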

Postgres Has Entered the Chat

So where does Postgres fit into all of this? If Oracle's & substitution is what you're accustomed to, Postgres actually has a direct equivalent. The psql client supports \set for defining client-side variables. psql has supported these practically sinc[...]
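To illustrate (the table and setting names below are my own sketch, not the post's): psql variables are interpolated with :name, or :'name' for a properly quoted literal, while server-side session state can be kept in customized options, written with SET and read back with current_setting():

```sql
-- Client-side, in psql: \set plus :'name' interpolation
\set owner 'alice'
SELECT * FROM documents WHERE owner_name = :'owner';

-- Server-side: a customized option (two-part "extension.name" form), per session
SET myapp.current_user_id = '42';
SELECT current_setting('myapp.current_user_id');    -- returns '42'
SELECT current_setting('myapp.not_set', true);      -- missing_ok form: NULL instead of an error
```

The psql substitution happens in the client, like Oracle's; the SET/current_setting pair is genuine server-side session state, closer to what MySQL users expect.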

PG DATA 2026. The talks I am most excited about. Part 2
Posted by Henrietta Dombrovskaya on 2026-05-01 at 01:17

Continuing my review of the upcoming program for PG DATA 2026, started here.

I will start with Umair’s talk, Securing Multi‑Tenant Databases with Row‑Level Security & Open‑Source Auditing. I always await Umair’s talks with great anticipation because his vision of Postgres is closely aligned with mine, and the topics he discusses are usually the ones I am most interested in. This talk interests me a lot because multi-tenancy is one of my favorite topics, but Umair’s approach is absolutely the opposite of the path I take, and I wish I had time to discuss it and ask questions afterward! Umair will just have to come to Chicago one more time to present at Prairie PUG!

In parallel, Radim Marek will present Visualizing PostgreSQL Storage Internals. I met Radim at DevDays Prague earlier this year, when he talked about regression testing of SQL queries, and to be honest, I was sad that our CfP committee didn’t choose that talk, because it was brilliant. I am sure he will deliver an equally brilliant talk in Chicago (I love what I read in the talk description); it will just be a different one :).

What I really like about this year’s program is that we have many presentations about real-life experiences, about complicated projects that succeeded, and lessons learned. On Thursday, we will have a number of such talks, including What I Learned Using PostgreSQL In Real Products by Hajira Sultana and Building and scaling Managed Postgres from 0 to 1000s by Sam Wilson.

Steve Zelasnik is a long-time active Prairie Postgres PUG member (and formerly Chicago PUG member), and he previously presented his work at PG Day Chicago. In his The Multi-Terabyte Trust Exercise: Validating Massive Postgres Migrations presentation, he will share yet another real-life experience and lessons learned. And I am sure, nobody would want to miss Graph-Like Analytics in PostgreSQL for Payer Behavior and Underpayment Detection by Nida Fatima – aren’t we all the victims of this system?!

If you ever participated in one of my op

[...]

All Your GUCs in a Row: authentication_timeout
Posted by Christophe Pettus in pgExperts on 2026-05-01 at 01:00
A connection is not free just because it has not logged in yet. From the moment the TCP handshake completes, the would-be client is holding a backend slot counted against max_connections, and it will hold that slot until one of two things happens: it finishes the authentication protocol, or authe…
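The GUC itself is straightforward to adjust; a sketch (the 30-second value is illustrative, not a recommendation):

```sql
SHOW authentication_timeout;                      -- default is 1min
ALTER SYSTEM SET authentication_timeout = '30s';  -- cap how long a slot can sit unauthenticated
SELECT pg_reload_conf();                          -- sighup context: no restart needed
```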

On pgvectorscale, and Hybrid Search Without an Elasticsearch Sidecar
Posted by Christophe Pettus in pgExperts on 2026-04-30 at 22:12
pgvector is excellent. It is also, at large scale, expensive — because the HNSW index it gives you wants to live in memory to be fast, and “wants to live in memory” stops being a casual statement somewhere around fifty million 1536-dimensional embeddings. At which point you reach for …

Posette 2026
Posted by Paolo Melchiorre in ITPUG on 2026-04-30 at 22:00

POSETTE: An Event for Postgres (pronounced /Pō-zet/, and formerly called Citus Con) is a free and virtual developer event. The name POSETTE stands for Postgres Open Source Ecosystem Talks Training & Education.

The Calm Platform Test: Is Your PostgreSQL Strategy Enterprise-Ready?
Posted by Vibhor Kumar on 2026-04-30 at 20:37

Features create capability. Calm operations create trust.

Most platform failures do not begin because one feature is missing. They usually begin when teams become afraid to change the systems that run the business.

They become cautious about upgrades, nervous about failover, uncertain about performance changes, and hesitant to touch architecture that has become too important to disturb and too fragile to evolve. Over time, the platform may still function, but it no longer feels safe to improve. That is when technology stops being an accelerator and quietly becomes a constraint.

This is why I believe enterprise PostgreSQL strategy needs a better question.

The question is not only:

Can PostgreSQL support this workload?

In many cases, the answer is already yes.

The more important question is:

Can we operate, evolve, govern, and scale this PostgreSQL platform without creating organizational anxiety?

That is the real enterprise test.

PostgreSQL has become far more than an open source relational database. It is now a serious foundation for transactional systems, cloud-native applications, data platform modernization, analytics-adjacent workloads, extensibility, AI-ready applications, vector search, and governed enterprise architectures. PostgreSQL 18 continues this direction with improvements such as asynchronous I/O, retained optimizer statistics during pg_upgrade, skip scan support for multicolumn B-tree indexes, and uuidv7() for timestamp-ordered UUIDs. These are not just feature bullets. They are signals that PostgreSQL continues to mature as an operational platform, not merely as a database engine.
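Two of those PostgreSQL 18 items can be seen directly from SQL; a quick illustration (the table and index names are mine):

```sql
-- uuidv7(): UUIDs whose leading bits encode a timestamp, so they sort by creation time
SELECT uuidv7();

-- Skip scan: a multicolumn B-tree index can now serve a query that omits the
-- leading column, provided that column has relatively few distinct values
CREATE INDEX orders_region_created_idx ON orders (region_id, created_at);
EXPLAIN SELECT * FROM orders WHERE created_at > now() - interval '1 day';
```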

But features alone do not make a platform enterprise-ready.

Enterprise readiness is not proven in a demo. It is proven during change. It is proven during maintenance windows, failover events, audits, upgrades, performance reviews, scaling pressure, and the uncomfortable Monday morning conversation after something did not behave the way everyone expe

[...]

New Presentation
Posted by Bruce Momjian in EDB on 2026-04-30 at 18:45

I just gave a new presentation at PGDay Armenia titled Building an MCP Server Using Postgres. The talk is a follow-up to my Databases in the AI Trenches talk, and explores how MCP allows functionality beyond LLMs and RAG alone. It includes MCP demos of a radiation detector and pretzel bakery.

PHP Goes BSD
Posted by Christophe Pettus in pgExperts on 2026-04-30 at 16:42
The php.internals vote closed on April 4, and PHP 9.0 will ship under the 3-clause BSD license. The RFC, driven by Ben Ramsey, replaces both the PHP License v3.01 and the Zend Engine License v2.0 with a single, OSI-recognized, FSF-recognized, GPL-compatible permissive license that has been sittin…

After pgBackRest
Posted by Christophe Pettus in pgExperts on 2026-04-30 at 15:00
pgBackRest is now unmaintained. If you were running pgBackRest in production — and a lot of people were running pgBackRest in production — what do you actually do now? The honest answer has three parts. First: the world has not ended. pgBackRest still works. The git repository still exists, the b…

Why sell the idea of contributing to PostgreSQL to your employer
Posted by Valeria Kaplan in Data Egret on 2026-04-30 at 11:31

How contribution decisions shape the sustainability of the PostgreSQL ecosystem

Last week at PGConf.DE, I gave a talk titled Not Just Altruism: Selling PostgreSQL Contributions to Your Employer.

It didn’t attract a large audience. To be fair, it was right after lunch on the last day of the conference, and there was strong competition from two excellent technical talks. Plus… why bother talking to your employer about contributing to PostgreSQL? :)

The news this week that David Steele, the core maintainer of pgBackRest, one of the most popular backup tools in the Postgres ecosystem, is stepping down highlights a critical point: without ongoing support for contributors from companies in the PostgreSQL ecosystem, and community stewardship of PostgreSQL tooling, the sustainability of both the tools and PostgreSQL itself is at risk.

So what was my talk about?

In short, it was about the motivations, passion, individual sacrifice, and courage of current contributors; the lack of understanding many companies have when it comes to contributing to PostgreSQL (yes, those same companies that rely on PostgreSQL and its ecosystem for their products, services, and revenue); and the minuscule number of contributors compared to PostgreSQL’s ever-growing user base.

A few stats

PostgreSQL has repeatedly been named “DBMS of the Year” by DB-Engines. According to the Stack Overflow survey of nearly 500,000 developers, more than 58% of professional developers have done extensive development work in PostgreSQL over the past year.

So how many people actually contribute to PostgreSQL?

Here’s a rough calculation:

Additionally, there are many individuals who self-report their contributions, e.g. those who contribute occasionally or in the short term, volunteer at events, organise meet-ups, review code, serve on various Postgres committees and entities, or speak about Postgres at non-PG conferences. They perform valuable work but are not yet formally recognised.

In the State of PostgreSQL surv

[...]

The best PostgreSQL databases are boring on purpose
Posted by Umair Shahid in Stormatics on 2026-04-30 at 10:09

Boring is an investment. Exciting is a bill.

The calmest PostgreSQL deployments in production share one trait. They are boring. Pages stay quiet. Dashboards stay green. The on-call engineer reads a book on Tuesday night. And the people running those databases will tell you, plainly, that boring is the achievement.

Think about flying for a minute. The flight everyone wants is the one where the captain says hello, the meal shows up on time, and a few hours later, the wheels touch down in the right city. That flight is boring. It is also a small miracle. Behind that boring flight sits decades of compounded discipline. Pilots with thousands of simulator hours. Mechanics with checklists that they have run a hundred times. Air traffic controllers, weather systems, redundant hydraulics, and post-incident reviews that the entire industry reads and learns from. The passenger experiences calm. Everyone else earns it.

A production database deserves the same lens.

What it takes to keep flights boring

A boring flight is the visible tip of an enormous iceberg of effort. Certifications get renewed. Engines get inspected on a schedule. Manuals get updated the moment something new is learned. Trainees fly with senior captains for years before they sit alone in the left seat. When something does go wrong somewhere in the world, the report becomes required reading for every operator in the industry.

This is the part most people forget when they admire how safe air travel has become. Safety is a continuous investment. The moment an airline starts cutting corners on the boring parts, the flights stop being boring.

Apply the same lens to PostgreSQL

Once you accept that boring is the goal, the operational picture rearranges itself.

A boring PostgreSQL deployment is one where autovacuum is tuned to the workload, replication lag stays inside a known band, backups get restored on a

[...]

Open source doesn’t die. It gets unfunded.
Posted by Jan Wieremjewicz in Percona on 2026-04-30 at 08:00

If you are using PostgreSQL in any capacity, this week very likely started with a bang. pgBackRest, one of the best-known tools for PostgreSQL, praised for its scalable and reliable approach to backups, has announced that the project is now archived.

Archived, you mean EOL?


No! Open source software rarely has a hard “end of life.” What it does have are maintainership gaps, and those can be just as serious.


Volatile Queries and Semantic Caching: How to Make Sure It Always Returns the Right Answer
Posted by Muhammad Aqeel in pgEdge on 2026-04-30 at 05:47

Part 3 of the Semantic Caching in PostgreSQL series. Part 1 covers the fundamentals: how it stores query embeddings, runs cosine similarity searches via pgvector, and returns cached LLM results without a round-trip to your model provider. Part 2 goes deeper into production operations: cache tags, eviction policies, monitoring, and Python integration patterns. This post focuses on a specific class of queries that need to be handled differently, and on where that handling belongs.

A well-tuned semantic cache can deliver 60–80% fewer LLM API calls and matching cost savings. But those numbers depend on caching the right queries. Cache everything and you risk returning answers that were accurate once but are no longer true — and returning them confidently, with no indication that anything is wrong. Understanding the line between cacheable and non-cacheable queries, and owning that line in the right layer of your stack, is what separates a semantic cache that saves money from one that quietly misleads users.

The Two Kinds of Queries Your Cache Will See

Every query that arrives at your application falls into one of two buckets.

Time-invariant queries have answers that do not depend on when they are asked. "What is the boiling point of water?" is the same answer today as it was last year and will be next year. "Explain how TCP/IP works." "What does idempotent mean?" Semantic caching is a natural fit for these: one LLM call populates the entry, and every paraphrase that follows is a free hit.

Volatile queries have answers that are bound to the moment they are asked. Their correct response changes with time, live state, or the specific user asking.

The defining property of volatile queries is that they produce stable embeddings but changing answers. Ask "What is the current time?" at 14:00 and again at 14:05, and the two vectors have a cosine similarity of 1.0: the same sentence, the same semantics, an identical embedding. But one correct answer is 14:00 and the other is 14:05. A cache that stores the first call and serves it fo[...]
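One way to own that line in the database layer is to mark entries as cacheable or not at write time and filter at lookup time. A minimal sketch, assuming pgvector and a psql variable holding the query embedding (the table and column names are mine, not the series'):

```sql
-- Hypothetical cache table: volatile answers are stored with cacheable = false
CREATE TABLE semantic_cache (
    id         bigserial PRIMARY KEY,
    prompt     text NOT NULL,
    embedding  vector(1536) NOT NULL,
    answer     text NOT NULL,
    cacheable  boolean NOT NULL DEFAULT true,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- Lookup: nearest neighbour by cosine distance, but only among
-- time-invariant entries, and only within a similarity threshold
SELECT answer
FROM semantic_cache
WHERE cacheable
  AND embedding <=> :'query_embedding' < 0.05
ORDER BY embedding <=> :'query_embedding'
LIMIT 1;
```

An entry flagged volatile never produces a hit, so "What is the current time?" always falls through to the model, regardless of how perfectly its embedding matches a stored one.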

SCALE 23x Vlog: PostgreSQL in Southern California
Posted by Pavlo Golub in Cybertec on 2026-04-30 at 05:00

I flew 12 hours to Pasadena, survived the sunshine, and came back with a vlog (or pavlog? 🙂) Worth it? Absolutely.

SCALE 23x is one of those events where the hallway conversations are as valuable as the talks. I grabbed my camera and tried to capture some of that energy. Featuring Bruce Momjian, Elizabeth Christensen, Mark Wong, and Gabrielle Roth — people who have been building this community for years.

This is Episode 1 of my conference vlog series. Go watch it. 👇

 

More episodes coming soon. Stay tuned! 🐘

 

The post SCALE 23x Vlog: PostgreSQL in Southern California appeared first on CYBERTEC PostgreSQL | Services & Support.

Troubleshooting logical replication delay made easy
Posted by Jobin Augustine in Percona on 2026-04-30 at 04:57

This blog is based on a real production case in which users experienced a serious delay in logical replication. Let me try to explain how to approach and analyze similar cases in a straightforward way, because lag in logical replication is a common problem, and we should expect it to come up in different environments. Troubleshooting can still be challenging, especially in DBaaS environments where we don't get in-depth information at the OS or hardware level. Such situations force us to work with the limited information available within the PostgreSQL connection (no host-level troubleshooting is possible).

The Case

The case that triggered this blog was an attempt to migrate from one cloud vendor to a recent version of PostgreSQL on another cloud vendor's DBaaS offering. The users started observing huge replication lag and reported it to Percona. As usual, we started with pg_gather data collection.

(At Percona, we use pg_gather for diagnosis. Even though this blog refers to the pg_gather report, any good diagnostic tool or script that can show the wait-event pattern and lag details will help.)

In this customer case, we saw up to 4.5 terabytes of lag on the transmission side (publisher). The “transmission lag” is the difference between the latest generated LSN and the LSN that the WAL sender has been able to send (sent_lsn in pg_stat_replication). That is the first indication that the problem is mainly on the publisher side (WAL sender): it is not able to send the information fast enough.
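On a DBaaS instance this transmission lag can be measured from an ordinary connection; one way to express it (this query is mine, not from the pg_gather report):

```sql
-- Bytes between the latest generated WAL position and what each WAL sender has sent
SELECT application_name,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), sent_lsn)) AS transmission_lag
FROM pg_stat_replication;
```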

The next step of the investigation is to understand what those WAL senders might be doing. The wait event information for each WAL sender can provide a clear clue about where the delay is happening.

Both WAL senders spend up to 85% of their time waiting on the “WalSenderWriteData” event. This is a very unusual level of wait.
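Sampling pg_stat_activity a few times over a couple of minutes is enough to see the dominant wait event; a sketch:

```sql
-- Run repeatedly; the wait_event column shows what each WAL sender is blocked on
SELECT pid, state, wait_event_type, wait_event
FROM pg_stat_activity
WHERE backend_type = 'walsender';
```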

Following is the logic behind this.

  1. Logical decoding hands a finished record to WalSndWriteData()
  2. The data is queued in the libpq
[...]

All Your GUCs in a Row: array_nulls
Posted by Christophe Pettus in pgExperts on 2026-04-30 at 01:00
We leave the archive arc behind and enter the first of several backward-compatibility GUCs. array_nulls controls whether the array input parser treats an unquoted NULL as an actual SQL null or as the four-character string "NULL". Default is on; context is user; it has been on by default since Pos…
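The difference is easy to see in a session. Sticking to text arrays, since an unquoted NULL that becomes the string "NULL" will not cast to int:

```sql
SET array_nulls = on;          -- the default
SELECT '{a,NULL,c}'::text[];   -- {a,NULL,c}: a real SQL null in the second slot

SET array_nulls = off;         -- backward-compatibility behaviour
SELECT '{a,NULL,c}'::text[];   -- {a,"NULL",c}: the literal four-character string
```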

A very illuminating article about contributing to Open Source and PostgreSQL
Posted by Luca Ferrari on 2026-04-30 at 00:00

An interesting article about PostgreSQL and Open Source.

A very illuminating article about contributing to Open Source and PostgreSQL

Abdelrhman Sersawy wrote an article titled How I started contributing to PostgreSQL in which he describes how he started to write code for the community.

I think the article is very good and detailed, as well as illuminating. It reminds me how I felt the very first times I was pushing commits to the open source ecosystem (not only to PostgreSQL).

pgagroal 2.1.0 is out!
Posted by Luca Ferrari on 2026-04-30 at 00:00

A new release of the fast connection pooler for PostgreSQL!

pgagroal 2.1.0 is out!

Yesterday, version 2.1.0 of pgagroal was released! This is a feature release, and the full changelog is available here.

This release provides a lot of new features, most notably:

  • a web console to monitor Prometheus metrics;
  • improved failover support;
  • a health check process;
  • improved pgagroal-cli ping command output, providing information about the health of the servers;
  • an improved test suite;
  • a configuration generator;
  • a lot of small improvements and a few bug fixes.

Glancing at a few of the above: the improved failover now allows users to define a script that runs after a successful failover, making it possible to notify all the standbys. The idea is that, using this notification script, standbys can be reconfigured on the fly to follow the newly promoted primary.

The health check process is a new worker process that can be started automatically in the background and periodically queries the configured primary server to see if it is active. Similarly, it checks the health of the standbys (if any) to ensure they are up and running. The health status (running or down) of the servers is now also included in the pgagroal-cli ping command, so it can quickly tell users whether both the pooler and the backends are alive.

The startup validation queries the pg_control_system() special function to ensure that the standbys are at the same major version as the primary, and warns or even prevents the pooler from starting.

The test suite has undergone a very important refactoring and improvement, and coverage has grown, providing more confidence in code changes.

A new command, pgagroal-config, has been added to the project. It allows users to create a main pgagroal.conf configuration from scratch interactively. Moreover, the same tool can be used to query and modify the configuration file in an automated and repeatable w

[...]

AIO Grows Up
Posted by Christophe Pettus in pgExperts on 2026-04-29 at 21:00
PostgreSQL 18 shipped asynchronous I/O. PostgreSQL 19, currently in feature freeze and headed for a September release, makes it tolerable to operate. That sounds like a snide reading. It is not. The AIO subsystem in PG18 was a serious piece of engineering, and on the workloads it covers — sequent…

PostgreSQL, Timezones, and DBeaver
Posted by Dave Stokes on 2026-04-29 at 18:40

Time zones are an unfortunately complex subject when dealing with PostgreSQL. You may be running your local time zone on your on-premises server or your own laptop, using the time zone of your server's physical location, or setting all your servers to UTC. All are valid approaches, depending on your circumstances.

DBeaver users know it is a very advanced tool for database work. But it is easy to run into time zone issues, as the default time zone for your session is taken from your client machine. Fortunately, this can be adjusted.

UTC?

UTC, or Coordinated Universal Time, is a time standard used as the basis for all time zones worldwide. It is a constant time scale and does not change for Daylight Saving Time (DST). The benefits include streamlined cross-region data synchronization, easier debugging, accurate time-based transaction ordering, and scalability for global applications. In distributed systems, storing data in UTC ensures that logs and transactions are ordered correctly, regardless of which geographic region recorded the action. There are many more reasons to use UTC, which will be ignored for brevity.


PostgreSQL knows how to adjust for situations where, for example, you are in a North American time zone and the cluster of servers is in EMEA. Most client programs that send a timestamp have it converted by the PostgreSQL server. But sometimes that does not happen, and you need to make adjustments.


Checking Your Time Zone


You can check your time zone in PostgreSQL with the SHOW timezone; command. In my case, I am in Texas, which is in the America/Chicago time zone.
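For reference, checking and overriding the session time zone looks like this:

```sql
SHOW timezone;        -- e.g. America/Chicago
SELECT now();         -- rendered with that zone's offset
SET TIME ZONE 'UTC';  -- affects only the current session
SELECT now();         -- the same instant, now displayed in UTC
```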


The SHOW timezone; command and its output

You can see the time zone offset at the end of th

[...]

REPACK Moves In
Posted by Christophe Pettus in pgExperts on 2026-04-29 at 18:34
For about fifteen years, the standard answer to “this table is bloated, what do I actually do about it” has been one of the out-of-tree options: pg_repack (the extension), pg_squeeze (Antonin Houska’s predecessor work), or a hand-rolled CREATE TABLE AS and swap. PG19 changes tha…

PostgresEDI April 2026 Meetup Recap & May Lightning Talks
Posted by Jimmy Angelakos on 2026-04-29 at 11:00

Another great evening for the PostgresEDI community! 🐘

First off, a massive thank you to everyone who came out to our April meetup. The discussions were brilliant, and it's amazing to see new faces come to experience the friendly environment at our meetups.

PostgresEDI April 2026 Meetup 1

PostgresEDI April 2026 Meetup 2

Hugo Tunius on stage at PostgresEDI April 2026 Meetup Hugo Tunius presenting at the April meetup.

We had a fantastic technical dive this month, with Hugo Tunius taking the stage to talk about plid — a custom ULID-inspired ID type with prefix support that fits in 128 bits, which he built as a Postgres extension using Rust and pgrx. You can check out his complete slides on GitHub if you want to dig into the technical details!

After that, I (Jimmy Angelakos) stepped up to do a bit of live coding on stage, walking through some common mistakes and showing how to fix bad SQL queries in Postgres.

It is always so great to see the energy in the room, with people sticking around to chat, ask questions, and share their own database stories.

What's Next? Lightning Talks! ⚡

For our next meetup in May, we are mixing things up with a Lightning Talks format!

We already have a great lineup taking shape:

  • I tried TimescaleDB for weather data, by River MacLeod
  • ROLLUPs and CUBEs, by Jim Gardner
  • LISTEN Carefully: How NOTIFY Can Trip Up Your Database, by Jimmy Angelakos

We Want You to Speak!

We're going to have lightning talks, and we'd love for you to submit! If you are a first-time speaker, even better—please let us know, and we will absolutely put you on the programme. Lightning talks are the perfect, low-pressure way to dip your toes into public speaking.

Even better, we encourage you to use our meetup as a springboard or a dry run to test your talk before submitting it to a major conference. Speaking of which, the Call for Papers (CFP) for PGDay UK 2026 closes on May 12th! The conference takes place in London on September 8th, so this is the perfect opportunity to practice your pitch in front of a friendly local crowd.

Join Us!

As always, t

[...]

All Your GUCs in a Row: archive_timeout
Posted by Christophe Pettus in pgExperts on 2026-04-29 at 01:00
The archiver only runs when a WAL segment is complete. On a busy database that happens constantly; on a quiet one it might not happen for hours or days. archive_timeout exists to prevent the resulting “our database has been accepting writes all afternoon but none of them are in the archive …
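Setting it is a one-liner; a sketch (the five-minute value is illustrative, not a recommendation):

```sql
ALTER SYSTEM SET archive_timeout = '5min';  -- force a segment switch, and thus archiving,
                                            -- at most this long after write activity
SELECT pg_reload_conf();                    -- sighup context: no restart needed
```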

PG DATA 2026: The talks I am most excited about. Part 1
Posted by Henrietta Dombrovskaya on 2026-04-29 at 00:40

Hello everyone, here comes a series of my annual posts about the Chicago Postgres Conference, and what I am most excited about. And I want to start with the training sessions we offer. All three training sessions are presented by my favorite people, and I can’t wait to tell you more about them!

The first training will be hosted by Andy Atkinson, who not only understands how to make the most of PostgreSQL + Ruby on Rails, but is also passionate about teaching others :). Andy authored the book “High Performance PostgreSQL for Rails,” which I reviewed while it was still in the making, and which I highly recommend to anyone for whom PostgreSQL + Ruby on Rails rings a bell. As a database architect on the receiving end of what mindless ORM usage can produce, I hope that people will use this opportunity to learn best practices and dos and don’ts.

If you attended last year’s PG DATA Students’ Day, you might have attended the “Database Design” training hosted by Lætitia Avrot. I remember that last year, when this training was offered for the first time, some attendees were unsure whether they might need it in real life. However, those who attended shared that this session was a true eye-opener, and it looks like they have shared their excitement with others: this year, this is the most requested training so far!

Lætitia is an amazing teacher and one of the most knowledgeable people in the Postgres world, and this training is probably the best of her productions 🙂

And finally, last but not least, Shaun Thomas will deliver the “DBA in the box” training. If you want to learn “what’s inside,” if you have always been curious about DBAs’ “black magic” – that’s your chance to become one of those magicians. Shaun starts from the basics, so I can assure you that you won’t feel “unqualified.”

All training sessions will run from 9 AM to 12 PM, and after lunch, we will have an official conference opening and a keynote by Robert Haas. How many favorite people can one person have? Well, I have many, but if you ev

[...]

PostgreSQL Ecosystem Problems
Posted by Stefanie Janine on 2026-04-28 at 22:00

PgBackRest Is Dead

Yesterday the maintainer of PgBackRest, David Steele, published the NOTICE OF OBSOLESCENCE.

For further information, please read the blog post pgBackRest is dead. Now what? by Lætitia Avrot. She also points out how to go on, as she, like me, has always recommended PgBackRest for PostgreSQL backups.

Therefore, a big thank you goes to David for the work he has done on PgBackRest.

And Now What?

Similar things have happened before, for example when Multicorn became an abandoned project due to the liquidation of Segfault Inc.

That was solved by several people creating a fork and naming it Multicorn2.

I predict that very soon several forks of PgBackRest will be spotted in the wild under different names. And that might become a problem: it could end up with different patches solving different problems inconsistently. In addition, which fork would become the replacement in the RPM or DEB packages?

This would also not solve the problem that a very good maintainer of an essential part of the PostgreSQL ecosystem does not get paid for the work he’s done. And keep in mind that this is not a small job that one could do as a side project. He, like all of us, needs to make a living.

Even having another company sponsor him would only be a short-term solution. What happens when that company gets bought, or a new CEO decides to save the money and invest it elsewhere?

Some people in the PostgreSQL community have already suggested that it might be a good idea to move PgBackRest into PostgreSQL itself. But that might also be a short-term solution, in addition to all the arguments against this approach, such as the code differing a lot from PostgreSQL coding standards.

What about other widely used projects in the PostgreSQL ecosystem, and extensions that have a lot of users?

An Umbrella For the Ecosystem?

IMHO, an umbrella organisation for tools in the PostgreSQL ecosystem would be a good solution: no single company owning the code. Also switching maintai

[...]

Managed Postgres, Examined: Amazon RDS for PostgreSQL
Posted by Christophe Pettus in pgExperts on 2026-04-28 at 21:00
First in a series of dispassionate surveys of the major managed-Postgres offerings. This post is about Amazon RDS for PostgreSQL — what AWS calls “traditional RDS,” as distinct from Aurora PostgreSQL, which is a separate product with a separate architecture and will get its own post. …

HOT Updates in Postgres
Posted by Radim Marek on 2026-04-28 at 20:23

In the previous article we watched every UPDATE leave a dead tuple behind. The same copy-on-write behaviour shows up from the operational angle in DELETEs are difficult. That's the tradeoff of MVCC, and on the heap alone it's tolerable. The problem is the indexes.

Every UPDATE in PostgreSQL potentially writes to every index on the table, even when the indexed columns didn't change. Five indexes, one updated column? Five extra index writes, five new entries to vacuum, five times the WAL traffic. At thousands of updates per second this becomes the dominant cost of running a write-heavy table.

Heap-Only Tuple (HOT) updates are PostgreSQL's escape hatch from this problem. They are, in my opinion, the single cleverest optimization in the storage engine. Let's trace exactly how they work.

Cost of a normal UPDATE

Without HOT, index maintenance scales poorly. Here's a table with multiple indexes:

pageinspect ships with the contrib modules and is available on most installations. It exposes raw page contents, useful for understanding storage, but never expose it to application users.
CREATE EXTENSION IF NOT EXISTS pageinspect;

CREATE TABLE hot_demo (
    id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name text NOT NULL,
    status text NOT NULL DEFAULT 'active',
    score numeric(10,2)
);

CREATE INDEX idx_hot_name ON hot_demo (name);
CREATE INDEX idx_hot_status ON hot_demo (status);

INSERT INTO hot_demo (name, status, score) VALUES
    ('alice', 'active', 95.00),
    ('bob',   'active', 82.50),
    ('carol', 'active', 77.25);

Updating the indexed name column requires substantial background work.

UPDATE hot_demo SET name = 'BOB' WHERE id = 2;

PostgreSQL has to:

  1. Set t_xmax on the old tuple to the current transaction ID, marking it dead
  2. Create a new tuple
  3. Insert a primary key index entry pointing to the new tuple's ctid
  4. Insert a new entry into idx_hot_name pointing to the new tuple's ctid
  5. Insert a new entry into idx_hot_status pointing to the new tuple's ctid
[...]
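The excerpt ends here, but the effect is easy to observe against the same hot_demo table: update a column no index covers and check n_tup_hot_upd in pg_stat_user_tables (this monitoring query is my addition, not from the article):

```sql
-- score has no index, so this update is a HOT candidate
UPDATE hot_demo SET score = 91.00 WHERE id = 2;

-- n_tup_hot_upd counts updates that avoided touching the indexes
SELECT n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 'hot_demo';
```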

Top posters

Number of posts in the past two months

Top teams

Number of posts in the past two months

Feeds

Planet

  • Policy for being listed on Planet PostgreSQL.
  • Add your blog to Planet PostgreSQL.
  • List of all subscribed blogs.
  • Manage your registration.

Contact

Get in touch with the Planet PostgreSQL administrators at planet at postgresql.org.