Does QGIS work with Postgres 14?

Performing a major upgrade of a Postgres cluster poses very little risk for the database itself. The binary data structure is quite stable (and because of that, a very quick upgrade using pg_upgrade's --link option is possible) and it's highly unlikely that the data will be damaged or lost during the process.

But there is another risk: the application making use of the data can stop working or lose some of its functionality. Why? Mostly because of changes in system catalogs and/or functions. For example, a catalog column can be dropped, or a function renamed. While PostGIS has a quite “soft” deprecation policy, and calling a deprecated function will result in a warning for a release or two, core Postgres is rather a “move fast and break things” guy. For QGIS specifically, the last serious problem was the upgrade from Postgres 11 to 12, which caused QGIS's DB Manager to stop working (https://github.com/qgis/QGIS/issues/32321). The same was true for the ogr2ogr command-line utility.

Luckily, the Postgres 14 release doesn't contain any system catalog changes that affect QGIS, and both the “Add layer” and DB Manager interfaces work fine for QGIS versions at least as old as 3.14 “Pi”. So, if you run a spatial database with QGIS as the client app and are considering upgrading: it's safe to proceed with the upgrade.

Broken indexes after Linux upgrade

Upgrading one's operating system to a new major version is an important system maintenance task and it's usually a good thing. It brings new features, security improvements, access to newer packages and so on. Sometimes it doesn't go that smoothly, for example the updated system will refuse to start. But upgrading the OS running a Postgres cluster in a way that involves a glibc library version update, or migrating a database to another machine running another OS (and glibc) version, poses a little known, but very significant risk…

The disease

The technical reasons behind the problem are very well explained in this post by Citus. Long story short, changes in glibc code can make the operating system's text sorting order inconsistent with the previous version, and in turn with already created B-Tree indexes. The possible damage is limited to indexes on textual data (VARCHAR, TEXT); indexes on numbers (integers, floats, NUMERICs…) and of course booleans are immune.

The breaking change in glibc was introduced in Debian 10 (Buster), Ubuntu 18.10 (Cosmic) and 20.04 LTS (Focal), and RedHat 8. Source here.

Symptoms and diagnosis

Index corruption can cause a variety of bad things, all of which can be described as “weird behavior” or “Postgres doesn't do such things!”. Examples include:

  • write transactions (INSERT, UPDATE, or ALTER) on an affected table causing unexpected crashes or restarts, with a “segmentation fault” message in the server logs,
  • write transactions failing with “ERROR: posting list tuple with X items cannot be split at offset”,
  • pgAgent jobs failing with “'std::__ios_failure' what(): basic_filebuf::_M_convert_to_external conversion error: iostream error”,
  • duplicate key values despite a UNIQUE constraint in place,
  • SELECT queries not returning rows that do exist in a table and fulfill the WHERE clause.

If those things happen to you, and you have recently upgraded your OS, switched over to a replica that was running a newer OS version, or migrated the database using streaming replication to another machine running a newer OS version – you may be a victim of index corruption.

The cure

First, check the system logs for any signs of hardware failure, which can be the cause of index corruption too. Running a REINDEX on a failing disk won’t make things better for you, but it certainly will make them worse. If the hardware is sane, you can assume the problem is just with indexes.

If the database is small-ish, your best bet is to take the system offline and either dump-and-restore the database contents or reindex everything with reindexdb. On the other hand, if that process would take too long, you'll need something more precise. The excellent amcheck extension is there to help you find the broken indexes that need reindexing. First, deploy it with:

CREATE EXTENSION amcheck;

Then you can check the indexes one-by-one by querying like this:

SELECT bt_index_check(oid) FROM pg_class WHERE relname='product_name_idx';
-- no output is good! if there is something wrong, the query will throw ERROR instead.

or automate the process with this code snippet from Gitlab.
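Alternatively, here is a minimal sketch of such automation, assuming the amcheck extension is already installed: it loops over all B-tree indexes in the current database and reports broken ones as warnings instead of stopping at the first error.

DO $$
DECLARE
    idx regclass;
BEGIN
    FOR idx IN
        SELECT c.oid
        FROM pg_class c
        JOIN pg_am am ON am.oid = c.relam
        WHERE c.relkind = 'i' AND am.amname = 'btree'
    LOOP
        BEGIN
            PERFORM bt_index_check(idx);
        EXCEPTION WHEN OTHERS THEN
            -- log the broken (or otherwise uncheckable) index and carry on
            RAISE WARNING 'index % appears to be corrupted: %', idx, SQLERRM;
        END;
    END LOOP;
END;
$$;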

After you have found the broken indexes, run:

REINDEX INDEX product_name_idx;

It can fail or cause Postgres to restart unexpectedly. In those cases, try to find another solution: DROP INDEX and CREATE INDEX; set indisvalid to FALSE in the pg_index system catalog for the problematic index and retry; acquire a manual ACCESS EXCLUSIVE lock on the table and retry… as a last resort, dump the table's contents to a file with COPY, drop the table and recreate it from scratch.

Prevention

The best thing to do is to avoid in-place OS upgrades when the upgrade involves any changes to collation support. The same applies to migrating databases using streaming replication between machines with different OS versions. Migrate the database logically, using dump-and-restore or logical replication. If that's not possible, remember to REINDEX the database before bringing the application back online or opening database access to users. This will save you a lot of headache…

What happens to logical replication after running pg_upgrade?

Even though logical replication is itself a method of major upgrade, sometimes you'll want to mix and match the two. Consider the following scenario: a big and busy database that has a logical replica, which has its own specific tables and indexes. When it's time to upgrade, you will probably want to upgrade both instances – the publisher and the subscriber. As the database is big and busy, using logical replication itself or dump-and-restore aren't the best choices – those methods will require a massive amount of storage, and dump-and-restore will also require a long period of downtime. So, the most viable option is to use the pg_upgrade tool, capable of upgrading a multi-terabyte cluster in a matter of minutes with the --link option. But…

pg_upgrade will destroy all replication slots

After a successful pg_upgrade run, the new cluster will have no replication slots. That means that if you allow your users and applications to connect to the upgraded cluster immediately, any data changes won't be replicated to the subscribers. So, it's very important to ensure no data is written to the publisher database until logical replication is up and running again.

Things to do when upgrading a publisher-subscriber environment

So, how to correctly run pg_upgrade without losing any data changes? These steps worked for me:

  1. Block all traffic to the primary database from apps and users – using firewall settings, a “host all all 0.0.0.0/0 reject” rule in the pg_hba.conf file, or both. Allow only replication connections from the subscriber(s).
  2. Check if there is any replication lag, and when it reaches zero, run DROP SUBSCRIPTION on the subscriber. This will drop the replication slot on the publisher – but it would be removed by pg_upgrade anyway.
  3. Upgrade the subscriber cluster.
  4. Upgrade the publisher cluster.
  5. On the subscriber, CREATE SUBSCRIPTION … WITH (copy_data=FALSE) – see the sketch after this list.
  6. Check if the logical replication works as expected, for example add a bogus row to some replicated table, check for its existence on subscriber and delete it afterwards.
  7. Recreate any other logical replication slots (for example for Debezium), if required.
  8. Unlock the write traffic.
  9. Profit!
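For step 5, the subscription creation could look like this – a sketch with hypothetical connection details, publication and subscription names:

-- Run on the subscriber; copy_data = false skips the initial table copy,
-- because the data is already there from before the upgrade.
CREATE SUBSCRIPTION my_sub
    CONNECTION 'host=publisher.example.com dbname=mydb user=replicator'
    PUBLICATION my_pub
    WITH (copy_data = false);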

While it's possible to detach a subscription from the replication slot (using ALTER SUBSCRIPTION … SET (slot_name=NONE)), recreate the slot, and attach the subscription to the slot again – in my test environment this resulted in the subscriber database not accepting new changes, so I finally decided to drop and recreate the subscription instead. That worked without any problems both in test and production.

Does PostGIS 3.1 mean the end of TopologyException?

One of the least liked things about PostGIS is that spatial analysis (like union, intersection, difference) can fail due to participating geometries that don't comply with OGC validity rules. The new version of GEOS, the engine that powers PostGIS geoprocessing functions, provided a rewritten “OverlayNG” set of functions that promised better robustness of geoprocessing. Finally, GEOS 3.9 and PostGIS 3.1 were released, and I decided to test those new versions to check if TopologyException is really gone.

Installation

As of now (January 2021), the PostGIS in the pgdg repository is still built against GEOS 3.7, so it can't use the OverlayNG features. If you don't want to compile PostGIS from source, on Ubuntu you can use the ubuntugis-unstable repo. The PostGIS 3.1 and GEOS 3.9 packages are provided for Ubuntu 20.04 (Focal) only.

On the other hand, PostgreSQL 13 is not mandatory: you can pair the newest PostGIS with Postgres 12 as well.

Data

I decided to test the new PostGIS with the following Polish public datasets, known to contain invalid geometries:

and intersect them with OpenStreetMap data downloaded from the Geofabrik website and imported using osm2pgsql.

Test

I ran “sum of lines within polygons” queries against the datasets. The nature conservation data came first:

SELECT sum(st_length(st_intersection(a.geom, b.way))), a.nazwa FROM uzytki_ekologiczne a, planet_osm_line b WHERE st_intersects(a.geom, b.way) GROUP BY a.nazwa;

And it returned an error:

ERROR:  lwgeom_intersection_prec: GEOS Error: TopologyException: Input geom 0 is invalid: Self-intersection at 473688.49685412901 307009.51074238349

So, maybe reducing the precision (a capability introduced in PostGIS 3.1) will help?

SELECT sum(st_length(st_intersection(a.geom, b.way, 0.1))), a.nazwa FROM uzytki_ekologiczne a, planet_osm_line b WHERE st_intersects(a.geom, b.way) GROUP BY a.nazwa;

Nope – it just changed one error message to another:

ERROR:  GEOSIntersects: TopologyException: side location conflict at 506560.04918080079 266363.2467461843

Further reducing the precision caused the same error to appear.

Maybe the cadastral dataset will perform better?

SELECT a."identyfikator_działki", sum(st_length(st_intersection(a.wkb_geometry, b.way))) FROM cadastre a, planet_osm_line b WHERE st_intersects(a.wkb_geometry, b.way) GROUP BY a."identyfikator_działki";

Now even the more robust ST_Intersects function fails:

ERROR:  GEOSIntersects: TopologyException: side location conflict at 506560.04918080079 266363.2467461843

Conclusion

As you can see, the new GEOS OverlayNG can have better performance, but it still can't stand self-intersecting polygons (the “Ring Self-Intersection” case is handled fine, but so it was with GEOS 3.7). This means that you probably shouldn't hurry to update PostGIS and GEOS if there are no packages for your OS yet, and you definitely can't dream of smoothly running spatial analysis on all kinds of real-world data after the upgrade. It also means that the infamous GIS magic wand, the zero-width buffer, isn't going anywhere… but efforts are being made to make its magic more predictable.
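For completeness, a workaround sketch I'd still reach for: repairing the invalid polygons on the fly with ST_MakeValid (or the zero-width buffer) before handing them to the overlay functions, reusing the table names from the test above:

SELECT sum(st_length(st_intersection(st_makevalid(a.geom), b.way))), a.nazwa
FROM uzytki_ekologiczne a, planet_osm_line b
WHERE st_intersects(st_makevalid(a.geom), b.way)
GROUP BY a.nazwa;

For anything bigger than a toy dataset it's usually better to materialize the repaired geometries once (e.g. UPDATE … SET geom = ST_MakeValid(geom)) instead of repairing them inside every query.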

Setting up PostgreSQL for Debezium

Edit: In the original post, I made a mistake about Debezium versions: it should be 1.2.2 to 1.3.0, not 2.0 to 3.0. And we have experienced a production incident with 1.2.1, so my recommendation is – go for 1.3.1, not any other version (as of November 2020).

Debezium is a popular Open Source change data capture (CDC) solution. It's based on a set of well-established components (Kafka, Kafka Connect, ZooKeeper) and used successfully in production by companies like Delhivery or BlaBlaCar. However, the setup is not as straightforward as one may think, and you may encounter some obstacles. In this post, I will share some tips originating from hooking up Debezium to a quite big and busy OLTP Postgres database.

Originally, Debezium was connected to a Postgres DB using the wal2json or decoderbufs plugins. Don't use them – as native logical replication became available in Postgres 10, a better option called “pgoutput” was introduced in Debezium. From the database perspective, the Debezium connector will be treated just like another logical replication subscriber, and the usual rules of logical replication will apply.

Check wal_level

Logical decoding – on which both logical replication and Debezium depend – requires the wal_level configuration parameter to be set to “logical”, and changing it requires a Postgres restart. Check this beforehand and schedule a database restart if necessary.
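A quick way to check (and, if needed, change) it – the ALTER SYSTEM route shown here is just one option, editing postgresql.conf works equally well:

SHOW wal_level;
-- if the result isn't 'logical':
ALTER SYSTEM SET wal_level = 'logical';
-- ...then restart the Postgres service.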

Pick the right Debezium version

NEVER, EVER use Debezium versions between 1.2.2 and 1.3.0. Those versions come with a bug which is hard to understand for anyone who isn't a Postgres expert, but its consequences are simple and dangerous: using such a buggy Debezium version will take the master database down sooner or later. This is because Debezium will consume data changes, but won't confirm the consumption to the database server. The server will retain all WAL segments since replication slot creation, and you will eventually run out of disk space (or money, if you use some pay-as-you-go storage like Amazon EFS or keep adding new disks).

Check database logs

When Debezium fails to connect, it returns a “Could not obtain encoding for database …” error message. Usually it has NOTHING to do with encoding, it just means that it couldn't connect. Check the Postgres log for details – it might be a wrong password or a missing pg_hba.conf rule. If the database log is clean, look for network issues.

Create slot and publication manually

While Debezium is capable of creating the required database entities (logical replication slot and publication) itself, it's a bad idea. First – it requires superuser privileges for the Debezium user. Second – it will create a publication for all tables, which can have unpleasant side effects – for example it will silently block UPDATE and DELETE queries on tables that don't have a primary key. Third – you may run into a “publication does not exist” error even if the publication is indeed created correctly, see here.

My advice is – create the publication and the replication slot manually before setting up a connector. Beware though – creation order matters! You have to create the publication before you create the replication slot, otherwise you will see the “publication does not exist” error. The correct order is:

CREATE PUBLICATION debezium FOR TABLE <your table list here>;
SELECT pg_create_logical_replication_slot('debezium', 'pgoutput');

You are free to change the table list after that at any time using ALTER PUBLICATION.

On the Debezium side, set:

"publication.name":"debezium",
"slot.name":"debezium"

in your JSON config. Of course the “debezium” name isn’t mandatory, you can be more specific – this is just an example.

Also, especially for high-volume databases, it’s recommended to use the Avro format for serialization:

"key.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter":"io.confluent.connect.avro.AvroConverter"

Monitor closely

As with any replication slot, if the subscriber doesn't consume changes, WAL segments will start to pile up. You have to monitor whether Debezium is up, whether the replication lag isn't getting too big, and whether the database server has some free disk space. Most of the time, if Debezium goes down, a simple restart will suffice, but you have to know when to do it. Debezium is much more crash-prone than Postgres, so a monitoring solution is a must-have.
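On the database side, a minimal monitoring sketch is a query against pg_replication_slots showing how much WAL each slot is holding back (thresholds and alerting are up to your monitoring tool):

SELECT slot_name,
       active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;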

If you look into the pg_stat_replication system view in Postgres, here are some differences between Debezium and regular logical replication:

  • The replay lag (the difference between pg_current_wal_lsn and replay_lsn) can become quite big and drop suddenly at one-minute intervals. This is because Debezium confirms change consumption every minute.
  • The reply_time in pg_stat_replication is in the past – in my case it was 1999-12-21 03:26:12 – which looks weird, but is normal.

Solving the “3rd arg isn’t within [0,1]” problem in pgRouting

Happy PostGIS Day everyone!

(for those who don't know: the third Wednesday of November each year is celebrated as “GIS Day”, and the day after GIS Day is, well, the PostGIS Day!)

One of the most annoying things about PostGIS is that sometimes spatial analysis functions fail due to a GEOS TopologyException. This happens because one of the participating geometries doesn't conform to validity rules. One of the most common validity errors is a self-intersection – an “hourglass” polygon. It's a well known issue and has well-known, but sometimes weird (zero-sized buffer, I'm looking at you!) solutions. But the documentation states that LINESTRINGs are always valid, so there won't be problems when working with linear geometries, right?

Wrong.

Imagine that you have some network (road, rail, electric, water…) and want to perform some network analysis on it. The way to go is to use the pgRouting extension, which is built on top of PostGIS. To convert a set of LINESTRINGs to a network topology suitable for analysis, you will want to use the pgr_nodeNetwork function first. And then you may encounter an error…

ERROR:  line_interpolate_point: 3rd arg isn't within [0,1]
 CONTEXT:  SQL statement "insert INTO public.my_network_noded (old_id,sub_id,geom) (  with cut_locations AS (
            SELECT l1id AS lid, locus
            FROM inter_loc
            -- then generates start AND end locus for each line that have to be cut buy a location point
            UNION ALL
            SELECT i.l1id  AS lid, 0 AS locus
            FROM inter_loc i left join public.my_network b on (i.l1id = b.id)
            UNION ALL
            SELECT i.l1id  AS lid, 1 AS locus
            FROM inter_loc i left join public.my_network b on (i.l1id = b.id)
            ORDER BY lid, locus
        ),
        -- we generate a row_number index column for each input line
        -- to be able to self-join the table to cut a line between two consecutive locations
        loc_with_idx AS (
            SELECT lid, locus, row_number() over (partition by lid ORDER BY locus) AS idx
            FROM cut_locations
        )
        -- finally, each original line is cut with consecutive locations using linear referencing functions
        SELECT l.id, loc1.idx AS sub_id, st_linesubstring(l.geom, loc1.locus, loc2.locus) AS geom
        FROM loc_with_idx loc1 join loc_with_idx loc2 using (lid) join public.my_network l on (l.id = loc1.lid)
        WHERE loc2.idx = loc1.idx+1
            -- keeps only linestring geometries
            AND geometryType(st_linesubstring(l.geom, loc1.locus, loc2.locus)) = 'LINESTRING') "
 PL/pgSQL function pgr_nodenetwork(text,double precision,text,text,text,text,boolean) line 229 at EXECUTE

But why? Well, linestrings are always valid, but not always simple. A common example of a non-simple linestring is a closed ring – a roundabout junction, for example. Another one is a self-intersecting line. Closed rings are OK for pgRouting, but self-intersecting lines are not:

(Figure: an example of a self-intersecting line that causes pgRouting to fail.)

So, if you ever encounter the “line_interpolate_point: 3rd arg isn't within [0,1]” error when trying to node a network of linestrings in pgRouting, check your geometries using the ST_IsSimple function:

SELECT * FROM my_network WHERE ST_IsSimple(geom) = FALSE;

If the self-intersections are perfectly valid from a business standpoint, you will have to split those lines into smaller pieces for pgRouting purposes.
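One possible way to do the splitting (a sketch, not the only option): ST_Node splits a linestring at its self-intersections, and ST_Dump turns the resulting MULTILINESTRING back into individual rows.

SELECT id, (ST_Dump(ST_Node(geom))).geom AS geom
FROM my_network
WHERE NOT ST_IsSimple(geom);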

Data consistency issues in logical replication

In contrast to physical (streaming) replication, write access on the subscriber node is not disabled. This means that DML and DDL statements like INSERT, UPDATE, DELETE or ALTER can be executed on the subscriber. As the replication is unidirectional, however, these changes won't be replicated back to the publisher. This behavior can lead to problems – the replication can stop, with all its consequences, or the data can become out of sync. In this post, I'd like to share some tips on how to deal with such problems.

Scenario 1: data INSERTed on subscriber side

Data can be inserted into a replication target table without any problems, provided that a row with an identical REPLICA IDENTITY won't subsequently be inserted on the publisher node. For example, consider the following table:

CREATE TABLE mydata(id integer primary key, value integer, description varchar);

Its REPLICA IDENTITY will use the “id” primary key, which is the default.

You can safely execute the following INSERT on the subscriber:

INSERT INTO mydata VALUES (100, 1024, 'test');

…but if you execute an INSERT with an identical primary key value on the publisher,

INSERT INTO mydata VALUES (100, 65536, 'test on publisher');

you will get an error on the subscriber node:

ERROR: duplicate key value violates unique constraint "mydata_pkey"
DETAIL: Key (id) = (100) already exists.

The replication will stop, and WAL segments will start to pile up on the publisher. Luckily, the log contains all the required information: you have to DELETE the problematic row on the subscriber:

DELETE FROM mydata WHERE id=100;

and the replication will restart automatically.

Scenario 2: data UPDATEd on subscriber side

As long as the required privileges are granted, UPDATE statements can be executed on the subscriber without any error or warning. The replication will not be halted. The publisher doesn't know about the change, and as a result, the values in the same row can differ between publisher and subscriber. But if the same row gets updated on the publisher, the whole row will be overwritten on the subscriber, and the change will be gone – even if the changes were made to different columns.
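To illustrate, using the mydata table from the previous scenario (a hypothetical sequence of statements):

-- on the subscriber: allowed, no error, replication keeps running
UPDATE mydata SET description = 'changed on subscriber' WHERE id = 100;
-- later, on the publisher: touches a different column, but the whole
-- replicated row overwrites the subscriber's row, so the description
-- change made above is silently lost
UPDATE mydata SET value = 2048 WHERE id = 100;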

Scenario 3: data DELETEd on subscriber side

This kind of inconsistency also won't cause replication to stop, and unless your log_min_messages configuration parameter is set to “debug1” or a more verbose level, chances are that you won't notice it. If you do log debug messages and update/delete a row on the publisher which doesn't exist on the subscriber, you should expect the following log entries on the subscriber node:

DEBUG: logical replication did not find row for update in replication target relation "mydata"
DEBUG: logical replication did not find row for delete in replication target relation "mydata"

Sometimes keeping a smaller dataset on the subscriber is desired and such messages can be safely ignored. But what if it's an inconsistency and has to be resolved? Unfortunately, native logical replication – unlike pglogical – doesn't provide tools to resynchronize the table. But there is a workaround…

How to resynchronize a table in PostgreSQL logical replication?

Using officially documented tools only, you can either:

  1. Lock the table for writes on the publisher, dump the data and copy the dump to the subscriber, truncate the table on the subscriber, restore the data dump on the subscriber and remove the lock on the publisher.
  2. Exclude the table from the current publication, truncate the table on the subscriber, and create a new publication-subscription pair.

But if you don’t mind editing the system catalogs by hand, there is another possibility…

WARNING: black magic ahead. Proceed at your own risk. I'm not responsible for any data loss or corruption resulting from following this advice. Always have a backup in case things go wrong. You have been warned.

The state of logically replicated tables is kept in the pg_subscription_rel catalog. The “srsubstate” column indicates whether the table is being initialized, data is being copied, it is synchronized, or the replication is ready. When you change its value to ‘i’, meaning “initialize”, you can force the initial sync to be started. So, in order to resync a table called “mydata”, execute the following statements on the subscriber:

TRUNCATE TABLE mydata;
UPDATE pg_subscription_rel SET srsubstate = 'i' WHERE srrelid = (SELECT relid FROM pg_stat_user_tables WHERE schemaname='public' AND relname='mydata');

(if the table is in another schema, replace “public” with relevant schema name)

After a while, check the pg_subscription_rel table – the state should automatically change to ‘d’, then to ‘s’ and finally ‘r’, meaning that the resync is complete. If the state hangs at ‘d’, check the logs and pg_stat_replication on the publisher node – if the replication fails due to a duplicate primary key value, purge the data on the subscriber, check that the row count is actually zero, and repeat the pg_subscription_rel update.
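To watch the progress, you can query the catalog directly (i = initialize, d = data copy, s = synchronized, r = ready):

SELECT srrelid::regclass AS table_name, srsubstate
FROM pg_subscription_rel;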

Scaling row level security to group roles

Row level security is a great Postgres feature that allows you to grant access to selected rows only, without having to create additional data structures. The common setup is to add a column with users' names, and a policy that compares this column's value with CURRENT_USER:

CREATE TABLE issues(id integer generated always as identity primary key, title varchar, description varchar, status varchar, assignee varchar);

ALTER TABLE issues ENABLE ROW LEVEL SECURITY;

CREATE POLICY issue_rls ON issues FOR ALL TO public USING (assignee = CURRENT_USER);

And then, any user that isn't a superuser and doesn't have the BYPASSRLS attribute will see (and be able to edit) only rows with the relevant “assignee” value. It works well in an environment with only a handful of users. But as the number of users grows, it can become difficult to maintain: sometimes we'll want to assign some rows to multiple users, or to a whole group of users. What to do then?

Enter the pg_has_role function. It checks if a particular user has a given role. Its usage is quite simple:

SELECT pg_has_role('chinook', 'service', 'USAGE');

the first argument is the user name, the second is a role name, and the third is a bit tricky: ‘MEMBER’ checks for direct or indirect membership in the role (that is, the right to execute SET ROLE), while ‘USAGE’ checks whether the role's privileges are immediately available without SET ROLE. As users are in fact a specific case of roles with the LOGIN attribute (for very old Postgres versions – 8.0 and earlier – this isn't true, but those versions don't have row level security anyway), this function works equally well for individual users and group roles.

So, we’ll modify the policy as such:

DROP POLICY issue_rls ON issues;
CREATE POLICY issue_rls ON issues FOR ALL TO public USING (pg_has_role(current_user, assignee, 'USAGE'));

…and now the rows can be assigned to individual users or whole roles.
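A quick usage example (with hypothetical role and user names): create a group role, grant it to the team members, and assign an issue to the whole group.

CREATE ROLE support_team NOLOGIN;
GRANT support_team TO alice, bob;
-- issue 42 is now visible to (and editable by) both alice and bob
UPDATE issues SET assignee = 'support_team' WHERE id = 42;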

Pitfalls and quirks of logical replication in Postgres 12

Logical replication, in a nutshell, is a process of “replaying” data changes in another database. The first attempts to implement replication in Postgres – Slony, Bucardo – were logical replication solutions. In contrast to binary (streaming) replication, it offers greater flexibility: data can be replicated selectively (only the relevant databases and tables), the replica server remains a fully functional instance that can have its own users, security rules and non-replicated databases, and in some cases the performance is better.

However, logical replication is not as foolproof as binary replication, and without proper care it can lead to a primary server crash. I'd like to share some thoughts after setting up logical replication in a large-ish (one terabyte) production database.

Do not proceed without monitoring

Seriously, if you don't have a robust monitoring solution that will warn you about system abnormalities – especially running out of disk space – set it up and test it before any attempt to implement logical replication. This is because logical replication can break silently, without any SQL exceptions, cause the WAL files to pile up, fill the entire disk and bring the primary server down. At least two things have to be monitored:

  • disk usage,
  • errors in Postgres log.

Only with such an early warning system will you be able to fix any issues before they cause an outage.

Replicate to the same server? Possible, but…

Logical replication can be used to replicate data between databases in a single Postgres cluster. It's a perfectly valid setup, but it requires special treatment: you have to create the logical replication slot first, and with the slot already in place, create a subscription pointing to that slot. If you try to set up replication in the default way – with automatic slot creation – the CREATE SUBSCRIPTION command will hang. No errors or warnings – just a hung statement.
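A sketch of that special treatment, with hypothetical object names – the slot is created explicitly on the source database, and the subscription is told to reuse it instead of creating its own:

-- on the source database:
CREATE PUBLICATION local_pub FOR TABLE mydata;
SELECT pg_create_logical_replication_slot('local_sub_slot', 'pgoutput');

-- on the target database (same cluster):
CREATE SUBSCRIPTION local_sub
    CONNECTION 'host=localhost dbname=source_db user=replicator'
    PUBLICATION local_pub
    WITH (create_slot = false, slot_name = 'local_sub_slot');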

Be careful with PUBLICATION FOR ALL TABLES

Like all previous logical replication solutions, native replication doesn't replicate data structure changes. If any DDL command is issued (CREATE, ALTER, DROP), it must be executed manually both on the primary and the replica(s). The FOR ALL TABLES modifier of a CREATE PUBLICATION statement doesn't change this limitation. Instead, it will cause the replication to stop as soon as you (or your application, or a Postgres extension) issue a CREATE TABLE statement. Also, FOR ALL TABLES will automatically include any tables created by extensions (like spatial_ref_sys from PostGIS), and tables that don't have a primary key or replica identity – both cases are problematic.

Spatial is special

The spatial_ref_sys table is a part of the PostGIS extension, and it's populated by the CREATE EXTENSION postgis; statement. More often than not, it shouldn't be replicated, as every PostGIS instance populates it itself. If you have to replicate it (for example, you work with coordinate systems that aren't part of the EPSG registry), remember to TRUNCATE the spatial_ref_sys table on the replica before creating the subscription.

Review the primary keys carefully

A table eligible for logical replication must have a primary key constraint or a replica identity – that's the rule. It's nothing new, as it was the same with previous logical replication solutions, but its enforcement in native Postgres replication is at least weird. You are allowed to add a table without a PK or replica identity to a publication, and it won't cause any error, but… it will block write activity (UPDATEs and DELETEs) on it!

You will need to add a PK as soon as possible (or, if you can't afford an exclusive lock for the time it takes to build the unique index, REPLICA IDENTITY FULL will do the job, though it's less performant) to unlock write access.

After adding a new table, refresh publication

It's not enough to execute “ALTER PUBLICATION … ADD TABLE …” on the primary server in order to add a new table to replication. You also have to log into the replica database and execute the following statement:

ALTER SUBSCRIPTION … REFRESH PUBLICATION;
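Putting both pieces together (hypothetical publication, subscription and table names):

-- on the primary:
ALTER PUBLICATION my_pub ADD TABLE new_table;
-- on the replica:
ALTER SUBSCRIPTION my_sub REFRESH PUBLICATION;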

Summary

Logical replication is great and has many use cases: separating transactional and analytical workloads, aggregating changes from multiple databases, and so on. But it is not as simple as it looks at first glance. Follow the rules and use it with caution, and you should enjoy fast, flexible data replication.

Batch operations made easy with psql's \gexec

Any DBA will sometimes have to execute some kind of batch DDL / DCL operation, for example:

  • grant privileges for all tables in all schemas to a specific user,
  • create a spatial index for all GEOMETRY columns that don't have such an index,
  • add a list of tables to logical replication,
  • alter all INTEGER columns to BIGINT,
  • … and many more.

If the database structure is simple, with only a handful of tables, this can be done manually without much effort. But what if there are dozens of schemas, hundreds of tables and thousands of columns? Possible automation solutions include writing a script in a scripting language of choice (Python, Perl…) or a PL/pgSQL procedure. But there's one more possibility: use one of the lesser known superpowers of the psql command-line client – the \gexec command.

How does it work? \gexec takes the query output and executes each row as an SQL statement. For example, to grant SELECT privileges on all tables in all schemas to the “chinook” user, you can execute the following command:

SELECT 'GRANT SELECT ON ALL TABLES IN SCHEMA ' || quote_ident(schema_name) || ' TO chinook;' FROM information_schema.schemata \gexec

psql will output the result – a set of “GRANT SELECT ON ALL TABLES IN SCHEMA …” statements – and execute them one by one. The use of the quote_ident function around the object name isn't technically necessary, but helpful – if any name requires double quotes, the database engine will add them automatically.
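Another sketch, matching one of the use cases from the list above – generating spatial indexes for all geometry columns (simplified: it doesn't check whether an index already exists):

SELECT format('CREATE INDEX ON %I.%I USING gist (%I)',
              f_table_schema, f_table_name, f_geometry_column)
FROM geometry_columns \gexec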

Personally, I like this method very much because it’s a huge time saver, and doesn’t require any boilerplate code.