Performing a major upgrade of a Postgres cluster poses very little risk to the database itself. The on-disk data format is quite stable (which is why a very quick upgrade using pg_upgrade's `--link` option is possible), and it's highly unlikely that data will be damaged or lost during the process.
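As a rough sketch, a link-based upgrade looks like the following (the binary and data directory paths are assumptions based on a Debian/Ubuntu-style layout; adjust them for your installation):

```shell
# Stop both clusters first, then run pg_upgrade from the NEW version's bin dir.
# --link hard-links data files instead of copying them, making the upgrade
# near-instant; the old cluster must not be started again afterwards.
/usr/lib/postgresql/14/bin/pg_upgrade \
  --old-bindir=/usr/lib/postgresql/13/bin \
  --new-bindir=/usr/lib/postgresql/14/bin \
  --old-datadir=/var/lib/postgresql/13/main \
  --new-datadir=/var/lib/postgresql/14/main \
  --link --check
```

Running with `--check` first only reports incompatibilities without touching either cluster; drop that flag to perform the actual upgrade.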
But there is another risk: an application making use of the data can stop working or lose some of its functionality. Why? Mostly because of changes in system catalogs and/or functions: a catalog column can be dropped, or a function renamed. While PostGIS has a fairly "soft" deprecation policy, and calling a deprecated function merely produces a warning for a release or two, core Postgres is more of a "move fast and break things" guy. For QGIS specifically, the last serious problem was with upgrading from Postgres 11 to 12, which caused QGIS's DB Manager to stop working (https://github.com/qgis/QGIS/issues/32321). The same catalog change also broke the ogr2ogr command-line utility.
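To illustrate the kind of breakage involved (the column below is, to my knowledge, the one behind the Postgres 12 incident): client tools that inspected `pg_class.relhasoids` failed once that column was removed along with table-level OIDs in Postgres 12:

```sql
-- Works on Postgres 11 and earlier; on 12+ it fails with
-- ERROR: column "relhasoids" does not exist
SELECT relname, relhasoids
FROM pg_catalog.pg_class
WHERE relkind = 'r';
```

Any client that builds queries against the system catalogs, rather than the more stable information_schema views, is exposed to this class of change.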
Luckily, the Postgres 14 release doesn't contain any system catalog changes that affect QGIS, and both the "Add layer" dialog and DB Manager work fine for QGIS versions at least as old as 3.14 "Pi". So, if you run a spatial database with QGIS as the client application and are considering upgrading: it's safe to proceed.