repmgr standby clone — clone a PostgreSQL standby node from another PostgreSQL node
repmgr standby clone clones a PostgreSQL node from another PostgreSQL node, typically the primary, but optionally from any other node in the cluster or from Barman. It creates the replication configuration required to attach the cloned node to the primary node (or another standby, if cascading replication is in use).

repmgr standby clone does not start the standby, and after cloning a standby, the command repmgr standby register must be executed to notify repmgr of its existence.
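For example, a typical minimal sequence might look like the following sketch (the connection details, configuration file path and data directory are placeholders; adjust them to your environment and start the standby with whichever service manager your system uses):

# clone from the primary "node1"
repmgr -h node1 -U repmgr -d repmgr -f /etc/repmgr.conf standby clone
# start the cloned standby, then register it with repmgr
pg_ctl -D /var/lib/pgsql/data start
repmgr -f /etc/repmgr.conf standby register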
Note that by default, all configuration files in the source node's data directory will be copied to the cloned node. Typically these will be postgresql.conf, postgresql.auto.conf, pg_hba.conf and pg_ident.conf. These may require modification before the standby is started.
In some cases (e.g. on Debian or Ubuntu Linux installations), PostgreSQL's configuration files are located outside of the data directory and will not be copied by default. repmgr can copy these files, either to the same location on the standby server (provided appropriate directory and file permissions are available), or into the standby's data directory. This requires passwordless SSH access to the primary server.

Add the option --copy-external-config-files to the repmgr standby clone command; by default files will be copied to the same path as on the upstream server. Note that the user executing repmgr must have write access to those directories.

To have the configuration files placed in the standby's data directory, specify --copy-external-config-files=pgdata, but note that any include directives in the copied files may need to be updated.
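For illustration, a clone invocation which copies any external configuration files into the standby's data directory might look like the following (the hostname, database and user shown are placeholders):

repmgr -h node1 -U repmgr -d repmgr -f /etc/repmgr.conf standby clone --copy-external-config-files=pgdata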
When executing repmgr standby clone with the --copy-external-config-files and --dry-run options, repmgr will check the SSH connection to the source node, but will not verify whether the files can actually be copied.

During the actual clone operation, a check will be made before the database itself is cloned to determine whether the files can actually be copied; if any problems are encountered, the clone operation will be aborted, enabling the user to fix any issues before retrying the clone.
For reliable configuration file management we recommend using a configuration management tool such as Ansible, Chef, Puppet or Salt.
By default, repmgr will create a minimal replication configuration containing the following parameters (an illustrative example follows the list):

- standby_mode (always 'on')
- recovery_target_timeline (always 'latest')
- primary_conninfo
- primary_slot_name (if replication slots are in use)
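For illustration only, on PostgreSQL 11 and earlier the generated recovery.conf might contain entries along the following lines (the connection string and slot name shown are placeholders; the actual values are derived from the node's repmgr.conf):

standby_mode = 'on'
recovery_target_timeline = 'latest'
primary_conninfo = 'host=node1 user=repmgr connect_timeout=2 application_name=node2'
primary_slot_name = 'repmgr_slot_2'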
The following additional parameters can be specified in repmgr.conf for inclusion in the replication configuration (a sample is shown after the list):

- restore_command
- archive_cleanup_command
- recovery_min_apply_delay
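As a sketch, these could be set in repmgr.conf along the following lines (the archive path and delay value are purely illustrative placeholders):

restore_command = 'cp /path/to/archive/%f %p'
archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r'
recovery_min_apply_delay = '5min'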
We recommend using Barman to manage WAL file archiving. For more details on combining repmgr and Barman, in particular using restore_command to configure Barman as a backup source of WAL files, see Cloning from Barman.
When initially cloning a standby, you will need to ensure that all required WAL files remain available while the cloning is taking place. To ensure this happens when using the default pg_basebackup method, repmgr will set pg_basebackup's --wal-method parameter to stream, which will ensure all WAL files generated during the cloning process are streamed in parallel with the main backup. Note that this requires two replication connections to be available (repmgr will verify sufficient connections are available before attempting to clone, and this can be checked before performing the clone using the --dry-run option).
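For example, such a pre-flight check might be run as follows (connection details are placeholders):

repmgr -h node1 -U repmgr -d repmgr -f /etc/repmgr.conf standby clone --dry-run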
To override this behaviour, in repmgr.conf set pg_basebackup's --wal-method parameter to fetch:

pg_basebackup_options='--wal-method=fetch'

and ensure that wal_keep_segments is set to an appropriately high value.
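For instance, in the upstream node's postgresql.conf this might look like the following (the value shown is purely illustrative; size it according to the expected clone duration and WAL generation rate):

wal_keep_segments = 128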
See the pg_basebackup documentation for details.

If using PostgreSQL 9.6 or earlier, replace --wal-method with --xlog-method.
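In that case the repmgr.conf setting shown above would become:

pg_basebackup_options='--xlog-method=fetch'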
repmgr supports standbys cloned by another method (e.g. using Barman's barman recover command). To integrate the standby as a repmgr node, once the standby has been cloned, ensure the repmgr.conf file is created for the node, and that it has been registered using repmgr standby register.

To register a standby which is not running, execute repmgr standby register --force and provide the connection details for the primary. See Registering an inactive node for more details.
Then execute the command repmgr standby clone --recovery-conf-only. This will create the recovery.conf file needed to attach the node to its upstream (in PostgreSQL 12 and later, the replication configuration is appended to postgresql.auto.conf), and will also create a replication slot on the upstream node if required. Note that the upstream node must be running.

In PostgreSQL 11 and earlier, an existing recovery.conf will not be overwritten unless the -F/--force option is provided.

Execute repmgr standby clone --recovery-conf-only --dry-run to check the prerequisites for creating the recovery configuration, and to display the contents of the configuration which would be added, without actually making any changes.
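As a sketch of the overall workflow for a standby cloned outside of repmgr (the host names, Barman server name and data directory path below are placeholders), this might look like:

# clone the standby using Barman rather than repmgr
barman recover --remote-ssh-command "ssh postgres@node2" pg-cluster latest /var/lib/pgsql/data
# create /etc/repmgr.conf on the standby, register it against the primary ("node1"),
# then generate and apply the replication configuration
repmgr -h node1 -U repmgr -d repmgr -f /etc/repmgr.conf standby register --force
repmgr -h node1 -U repmgr -d repmgr -f /etc/repmgr.conf standby clone --recovery-conf-only --dry-run
repmgr -h node1 -U repmgr -d repmgr -f /etc/repmgr.conf standby clone --recovery-conf-only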
-d, --dbname=CONNINFO
Connection string of the upstream node to use for cloning.
--dry-run
Check prerequisites but don't actually clone the standby.
If --recovery-conf-only is specified, the contents of the generated recovery configuration will be displayed but not written.
-c, --fast-checkpoint
Force fast checkpoint (not effective when cloning from Barman).
--copy-external-config-files[={samepath|pgdata}]
Copy configuration files located outside the data directory on the source node to the same path on the standby (default) or to the PostgreSQL data directory.
--no-upstream-connection
When using Barman, do not connect to upstream node.
-R, --remote-user=USERNAME
Remote system username for SSH operations (default: current local system username).
--recovery-conf-only
Create recovery configuration for a previously cloned instance. In PostgreSQL 11 and earlier, the replication configuration will be written to recovery.conf; in PostgreSQL 12 and later, it will be written to postgresql.auto.conf.
--replication-user
User to make replication connections with (optional, not usually required).
--superuser
If the repmgr user is not a superuser, the name of a valid superuser must be provided with this option.
--upstream-conninfo
primary_conninfo value to include in the recovery configuration when the intended upstream server does not yet exist. Note that repmgr may modify the provided value, in particular to set the correct application_name.
--upstream-node-id
ID of the upstream node to replicate from (optional, defaults to the primary node).
--without-barman
Do not use Barman even if configured.
A standby_clone event notification will be generated.
See cloning standbys for details about various aspects of cloning.