repmgr standby clone

repmgr standby clone — clone a PostgreSQL standby node from another PostgreSQL node

Description

repmgr standby clone clones a PostgreSQL node from another PostgreSQL node, typically the primary, but optionally from any other node in the cluster or from Barman. It creates the recovery.conf file required to attach the cloned node to the primary node (or another standby, if cascading replication is in use).

Note

repmgr standby clone does not start the standby, and after cloning a standby, the command repmgr standby register must be executed to notify repmgr of its existence.
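As a sketch, a typical clone-and-register sequence might look like the following (hostnames, usernames and paths here are assumptions, not defaults):

      repmgr -h node1 -U repmgr -d repmgr -f /etc/repmgr.conf standby clone
      pg_ctl -D /var/lib/postgresql/data start
      repmgr -f /etc/repmgr.conf standby register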

Handling configuration files

Note that by default, all configuration files in the source node's data directory will be copied to the cloned node. Typically these will be postgresql.conf, postgresql.auto.conf, pg_hba.conf and pg_ident.conf. These may require modification before the standby is started.

In some cases (e.g. on Debian or Ubuntu Linux installations), PostgreSQL's configuration files are located outside of the data directory and will not be copied by default. repmgr can copy these files, either to the same location on the standby server (provided appropriate directory and file permissions are available), or into the standby's data directory. This requires passwordless SSH access to the primary server. Add the option --copy-external-config-files to the repmgr standby clone command; by default files will be copied to the same path as on the upstream server. Note that the user executing repmgr must have write access to those directories.

To have the configuration files placed in the standby's data directory, specify --copy-external-config-files=pgdata, but note that any include directives in the copied files may need to be updated.
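For example, on a Debian-style system where configuration lives outside the data directory, the clone command might be invoked as follows (connection parameters are placeholders):

      repmgr -h node1 -U repmgr -d repmgr -f /etc/repmgr.conf \
        standby clone --copy-external-config-files=pgdata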

Note

When executing repmgr standby clone with the --copy-external-config-files and --dry-run options, repmgr will check the SSH connection to the source node, but will not verify whether the files can actually be copied.

During the actual clone operation, a check will be made before the database itself is cloned to determine whether the files can actually be copied; if any problems are encountered, the clone operation will be aborted, enabling the user to fix any issues before retrying the clone operation.

Tip

For reliable configuration file management we recommend using a configuration management tool such as Ansible, Chef, Puppet or Salt.

Customising recovery.conf

By default, repmgr will create a minimal recovery.conf containing the following parameters:

  • standby_mode (always 'on')
  • recovery_target_timeline (always 'latest')
  • primary_conninfo
  • primary_slot_name (if replication slots are in use)

The following additional parameters can be specified in repmgr.conf for inclusion in recovery.conf:

  • restore_command
  • archive_cleanup_command
  • recovery_min_apply_delay
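As an illustrative repmgr.conf fragment, the following sets restore_command to fetch archived WAL from Barman (the Barman server name and backup name shown are assumptions):

      restore_command='/usr/bin/barman-wal-restore backup-server pg %f %p'
      recovery_min_apply_delay='5min'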

Note

We recommend using Barman to manage WAL file archiving. For more details on combining repmgr and Barman, in particular using restore_command to configure Barman as a backup source of WAL files, see Cloning from Barman.

Managing WAL during the cloning process

When initially cloning a standby, you will need to ensure that all required WAL files remain available while the cloning is taking place. To ensure this happens when using the default pg_basebackup method, repmgr will set pg_basebackup's --wal-method parameter to stream, which will ensure all WAL files generated during the cloning process are streamed in parallel with the main backup. Note that this requires two replication connections to be available (repmgr will verify sufficient connections are available before attempting to clone, and this can be checked before performing the clone using the --dry-run option).
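For instance, a preflight check of the replication connection requirements (connection parameters are placeholders) could be run as:

      repmgr -h node1 -U repmgr -d repmgr -f /etc/repmgr.conf \
        standby clone --dry-run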

To override this behaviour, set pg_basebackup's --wal-method parameter to fetch in repmgr.conf:

      pg_basebackup_options='--wal-method=fetch'

and ensure that wal_keep_segments is set to an appropriately high value. See the pg_basebackup documentation for details.
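On the upstream node, the corresponding postgresql.conf setting might look like this (the value shown is an arbitrary example; choose one appropriate to your WAL generation rate):

      wal_keep_segments = 128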

Note

If using PostgreSQL 9.6 or earlier, replace --wal-method with --xlog-method.

Using a standby cloned by another method

repmgr supports standbys cloned by another method (e.g. using barman's barman recover command).

To integrate the standby as a repmgr node, once the standby has been cloned, ensure the repmgr.conf file is created for the node, and that it has been registered using repmgr standby register. Then execute the command repmgr standby clone --recovery-conf-only. This will create the recovery.conf file needed to attach the node to its upstream, and will also create a replication slot on the upstream node if required.

Note that the upstream node must be running. An existing recovery.conf will not be overwritten unless the -F/--force option is provided.

Execute repmgr standby clone --recovery-conf-only --dry-run to check the prerequisites for creating the recovery.conf file, and display the contents of the file without actually creating it.
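For example (connection details assumed):

      repmgr -h node1 -U repmgr -d repmgr -f /etc/repmgr.conf \
        standby clone --recovery-conf-only --dry-run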

Note

--recovery-conf-only was introduced in repmgr 4.0.4.

Options

-d, --dbname=CONNINFO

Connection string of the upstream node to use for cloning.

--dry-run

Check prerequisites but don't actually clone the standby.

If --recovery-conf-only is specified, the contents of the generated recovery.conf file will be displayed, but the file itself will not be written.

-c, --fast-checkpoint

Force fast checkpoint (not effective when cloning from Barman).

--copy-external-config-files[={samepath|pgdata}]

Copy configuration files located outside the data directory on the source node to the same path on the standby (default) or to the PostgreSQL data directory.

--no-upstream-connection

When cloning from Barman, do not connect to the upstream node.

-R, --remote-user=USERNAME

Remote system username for SSH operations (default: current local system username).

--recovery-conf-only

Create a recovery.conf file for a previously cloned instance (repmgr 4.0.4 and later).

--replication-user

User to make replication connections with (optional, not usually required).

--superuser

If the repmgr user is not a superuser, the name of a valid superuser must be provided with this option.

--upstream-conninfo

primary_conninfo value to write in recovery.conf when the intended upstream server does not yet exist.

Note that repmgr may modify the provided value, in particular to set the correct application_name.

--upstream-node-id

ID of the upstream node to replicate from (optional; defaults to the primary node).

--without-barman

Do not use Barman even if configured.

Event notifications

A standby_clone event notification will be generated.

See also

See cloning standbys for details about various aspects of cloning.