Upgrading one’s operating system to a new major version is an important system maintenance task and is usually a good thing: it brings new features, security improvements, access to newer packages and so on. Sometimes it doesn’t go smoothly, for example the updated system refuses to start. But upgrading the OS under a running Postgres cluster when the upgrade includes a new glibc version, or migrating a database to another machine running a different OS (and glibc) version, poses a little-known but very significant risk…
The disease
The technical reasons behind the problem are very well explained in this post by Citus. Long story short, changes in glibc’s collation code can make the operating system’s text sorting order inconsistent with the previous version, and in turn with already created B-Tree indexes. The possible damage is limited to indexes on textual data (VARCHAR, TEXT); numbers (integers, floats, NUMERICs…) and of course booleans are immune.
The breaking change in glibc was introduced in Debian 10 (Buster), Ubuntu 18.10 (cosmic) and 20.04 LTS (focal), and Red Hat Enterprise Linux 8. Source here.
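It is worth knowing which collations your databases actually depend on. The queries below are a minimal sketch; note that a recorded version for libc collations is only available on PostgreSQL 13 and newer:

-- Which libc locale does each database sort text with?
SELECT datname, datcollate FROM pg_database;

-- Versions recorded for libc-provided collations ('c' = libc provider).
-- After an OS upgrade, a mismatch with the running glibc is reported as a
-- WARNING the first time such a collation is used (PostgreSQL 13+).
SELECT collname, collversion
FROM pg_collation
WHERE collprovider = 'c' AND collversion IS NOT NULL;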
Symptoms and diagnosis
Index corruption can cause a variety of bad things, all of which can be described as “weird behavior” or “Postgres doesn’t do such things!”. Examples include:
- write transactions (INSERT, UPDATE, or ALTER) on an affected table causing unexpected crashes or restarts, with a “segmentation fault” message in the server logs,
- write transactions failing with “ERROR: posting list tuple with X items cannot be split at offset”,
- pgAgent jobs failing with “'std::__ios failure' what(): basic_filebuf::_M_convert_to_external conversion error: iostream error”,
- duplicate key values despite a UNIQUE constraint being in place,
- SELECT queries not returning rows that do exist in the table and match the WHERE clause.
If those things happen to you, and you have recently upgraded your OS, switched over to a replica that was running a newer OS version, or migrated the database using streaming replication to another machine running a newer OS version, you may be a victim of index corruption.
The cure
First, check the system logs for any signs of hardware failure, which can cause index corruption too. Running a REINDEX on a failing disk won’t make things better for you, but it certainly can make them worse. If the hardware is sound, you can assume the problem is just with the indexes.
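If the cluster was initialized with data checksums (and runs PostgreSQL 12 or newer), the statistics views offer a quick extra sanity check for storage-level corruption, as sketched below:

-- Non-zero counters here point to broken pages on disk, not to collation trouble.
SELECT datname, checksum_failures, checksum_last_failure
FROM pg_stat_database
WHERE checksum_failures > 0;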
If the database is small-ish, your best bet is to take the system offline and either dump-and-restore the database contents or reindex everything with reindexdb. If that would take too long, you’ll need something more precise. The excellent amcheck extension is there for you to find the broken indexes that need reindexing. First, deploy it with:
CREATE EXTENSION amcheck;
Then you can check the indexes one-by-one by querying like this:
SELECT bt_index_check(oid) FROM pg_class WHERE relname = 'product_name_idx';
-- No error is good! If something is wrong, the query will throw an ERROR instead.
or automate the process with this code snippet from GitLab.
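If you cannot use that snippet, here is a rough, minimal equivalent: a DO block that runs bt_index_check() on every B-Tree index in the current database and reports failures as warnings instead of aborting on the first broken index. Treat it as a sketch and restrict it to the affected schemas on a large cluster:

DO $$
DECLARE
    idx regclass;
BEGIN
    FOR idx IN
        SELECT c.oid
        FROM pg_class c
        JOIN pg_am am ON am.oid = c.relam
        WHERE c.relkind = 'i'          -- indexes only
          AND am.amname = 'btree'      -- amcheck only understands B-Trees
          AND c.relpersistence = 'p'   -- skip temporary indexes
    LOOP
        BEGIN
            PERFORM bt_index_check(idx);
        EXCEPTION WHEN OTHERS THEN
            RAISE WARNING 'index % looks corrupted: %', idx, SQLERRM;
        END;
    END LOOP;
END;
$$;

bt_index_check() takes only an ACCESS SHARE lock on the index and its table, so this sweep can usually run against a live system.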
After you have found the broken indexes, run:
REINDEX INDEX product_name_idx;
It can fail or cause Postgres to restart unexpectedly. In those cases, try to find another solution: DROP INDEX and CREATE INDEX, set indisvalid to false in the pg_index system catalog for the problematic index and retry, acquire a manual ACCESS EXCLUSIVE lock on the table and retry… As a last resort, dump the table’s contents to a file with COPY, drop the table, and recreate it from scratch.
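As an illustration of the drop-and-recreate route (the index, table, and column names below are placeholders carried over from the example above; fetch the real definition from pg_indexes before dropping anything):

-- Grab the exact definition so the index can be recreated verbatim.
SELECT indexdef FROM pg_indexes WHERE indexname = 'product_name_idx';

DROP INDEX product_name_idx;
-- Recreate it; CONCURRENTLY avoids blocking writes, but cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY product_name_idx ON product (name);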
Prevention
The best thing to do is to avoid in-place OS upgrades when the upgrade involves any change to the collation support. The same applies to migrating databases between machines running different OS versions using streaming replication. Migrate the database logically instead, using dump-and-restore or logical replication. If that’s not possible, remember to REINDEX the database before bringing the application back online or opening database access to users. This will save you a lot of headache…
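That reindex-before-reopening step can be as simple as the statement below, run while the application is still stopped (mydb is a placeholder; reindexdb from the shell does the same, and newer versions can parallelize the work across tables with --jobs):

REINDEX DATABASE mydb;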