repmgr node check — performs some health checks on a node from a replication perspective
Performs some health checks on a node from a replication perspective. This command must be run on the local node.
Currently repmgr performs health checks on physical replication slots only, with the aim of warning about streaming replication standbys which have become detached and the associated risk of uncontrolled WAL file growth.
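The condition these checks look for can also be inspected directly in PostgreSQL's standard pg_replication_slots catalog view; the following query (shown here for illustration, not part of repmgr itself) lists physical slots and whether each currently has an attached consumer:

    $ psql -c "SELECT slot_name, active FROM pg_replication_slots WHERE slot_type = 'physical'"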
Execution on the primary server:
$ repmgr -f /etc/repmgr.conf node check
Node "node1":
    Server role: OK (node is primary)
    Replication lag: OK (N/A - node is primary)
    WAL archiving: OK (0 pending files)
    Upstream connection: OK (N/A - is primary)
    Downstream servers: OK (2 of 2 downstream nodes attached)
    Replication slots: OK (node has no physical replication slots)
    Missing physical replication slots: OK (node has no missing physical replication slots)
    Configured data directory: OK (configured "data_directory" is "/var/lib/postgresql/data")
Execution on a standby server:
$ repmgr -f /etc/repmgr.conf node check
Node "node2":
    Server role: OK (node is standby)
    Replication lag: OK (0 seconds)
    WAL archiving: OK (0 pending archive ready files)
    Upstream connection: OK (node "node2" (ID: 2) is attached to expected upstream node "node1" (ID: 1))
    Downstream servers: OK (this node has no downstream nodes)
    Replication slots: OK (node has no physical replication slots)
    Missing physical replication slots: OK (node has no missing physical replication slots)
    Configured data directory: OK (configured "data_directory" is "/var/lib/postgresql/data")
Each check can be performed individually by supplying an additional command line parameter, e.g.:
$ repmgr node check --role
OK (node is primary)
Parameters for individual checks are as follows:
--role
: checks if the node has the expected role
--replication-lag
: checks if the node is lagging by more than replication_lag_warning or replication_lag_critical seconds
--archive-ready
: checks for WAL files which have not yet been archived, and returns WARNING or CRITICAL if the number exceeds archive_ready_warning or archive_ready_critical respectively
--downstream
: checks that the expected downstream nodes are attached
--upstream
: checks that the node is attached to its expected upstream
--slots
: checks there are no inactive physical replication slots
--missing-slots
: checks there are no missing physical replication slots
--data-directory-config
: checks that the data directory configured in repmgr.conf matches the actual data directory. This check is not directly related to replication, but is useful to verify that repmgr is correctly configured.
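For example, running the --upstream check individually on the standby from the earlier example produces a single status line; the output below is illustrative, reusing the node names and configuration file path from the examples above:

    $ repmgr -f /etc/repmgr.conf node check --upstream
    OK (node "node2" (ID: 2) is attached to expected upstream node "node1" (ID: 1))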
Several checks are provided for diagnostic purposes and are not included in the general output:
--db-connection
: checks if repmgr can connect to the database on the local node. This option is particularly useful in combination with SSH, as it can be used to troubleshoot connection issues encountered when repmgr is executed remotely (e.g. during a switchover operation); see the example following this list.
--replication-config-owner
: checks if the file containing replication configuration (PostgreSQL 12 and later: postgresql.auto.conf; PostgreSQL 11 and earlier: recovery.conf) is owned by the same user who owns the data directory. Incorrect ownership of these files (e.g. if they are owned by root) will cause operations which need to update the replication configuration (e.g. repmgr standby follow or repmgr standby promote) to fail.
-S/--superuser
: connect as the named superuser instead of the repmgr user
--csv
: generate output in CSV format (not available for individual checks)
--nagios
: generate output in a Nagios-compatible format (for individual checks only)
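As an example of the --db-connection diagnostic used remotely, the check can be invoked over SSH to verify that repmgr, when executed from another host, can reach the local node's database. The host name and system user below are assumptions for illustration only:

    $ ssh postgres@node2 "repmgr -f /etc/repmgr.conf node check --db-connection"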
When executing repmgr node check with one of the individual checks listed above, repmgr will emit one of the following Nagios-style exit codes (even if --nagios is not supplied):
0
: OK
1
: WARNING
2
: ERROR
3
: UNKNOWN
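These exit codes make individual checks straightforward to wrap in monitoring or automation scripts. A minimal sketch, assuming the configuration file path used in the examples above:

    #!/bin/sh
    # Run a single check and branch on the Nagios-style exit code
    repmgr -f /etc/repmgr.conf node check --archive-ready
    case $? in
        0) echo "OK: WAL archiving is keeping up" ;;
        1) echo "WARNING: pending WAL files exceed archive_ready_warning" ;;
        2) echo "ERROR: pending WAL files exceed archive_ready_critical" ;;
        *) echo "UNKNOWN: check could not be performed" ;;
    esac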
One of the following exit codes will be emitted by repmgr node check if no individual check was specified:
SUCCESS (0)
: No issues were detected.
ERR_NODE_STATUS (25)
: One or more issues were detected.
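A minimal sketch of acting on this exit code in a script, again assuming the configuration file path from the examples above:

    #!/bin/sh
    # A non-zero exit status (ERR_NODE_STATUS, 25) means one or more issues were detected
    if ! repmgr -f /etc/repmgr.conf node check > /dev/null
    then
        echo "repmgr node check reported issues on this node" >&2
    fi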