Nodes and clusters store information that can be thought of as schema, metadata or topology. Users, vhosts, queues, exchanges, bindings, and runtime parameters all fall into this category. This metadata is called definitions in RabbitMQ parlance.
Definitions can be exported to a file and then imported into another cluster or used for schema backup or data seeding.
Definitions are stored in an internal database and replicated across all cluster nodes. Every node in a cluster has its own replica of all definitions. When any part of the definitions changes, the update is performed on all nodes in a single transaction. This means that, in practice, definitions can be exported from any cluster node with the same result.
VMware RabbitMQ supports Warm Standby Replication to a remote cluster, which makes it easy to run a warm standby cluster for disaster recovery.
Definition import on node boot is the recommended way of pre-configuring nodes at deployment time.
Definitions can be exported as a JSON file in a number of ways.
Definitions can be exported for a specific virtual host or the entire cluster (all virtual hosts). When definitions are exported for just one virtual host, some information (contents of the other virtual hosts, and users without any permissions for the target virtual host) will be excluded from the exported file.
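For example, a single virtual host's definitions can be exported via the HTTP API's GET /api/definitions/{vhost} endpoint. The sketch below assumes a virtual host named "vh1"; virtual host names must be percent-encoded, so the default virtual host "/" would be requested as %2F:

# Requires management plugin to be enabled,
# placeholders are used for credentials and hostname.
# Exports the definitions of a single virtual host, "vh1"
curl -u {username}:{password} -X GET http://{hostname}:15672/api/definitions/vh1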
Exported user data contains password hashes as well as password hashing function information. While brute forcing passwords with hashing functions such as SHA-256 or SHA-512 is not a completely trivial task, user records should be considered sensitive information.
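For illustration, a user record in an exported definitions file looks roughly like the following (the hash value below is made up, and the exact set of fields can vary between releases):

{
  "name": "user1",
  "password_hash": "T2z+1yKnhaNNOeyy3MyLgwphP4ncUW2KdGbSZMBh/8Ust3DF",
  "hashing_algorithm": "rabbit_password_hashing_sha256",
  "tags": ["management"]
}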
To export definitions using rabbitmqctl, use rabbitmqctl export_definitions:
# Does not require management plugin to be enabled
rabbitmqctl export_definitions /path/to/definitions.file.json
rabbitmqadmin export is very similar but uses the HTTP API and is compatible with older versions:
# Requires management plugin to be enabled
rabbitmqadmin export /path/to/definitions.file.json
In this example, the GET /api/definitions endpoint is used directly to export definitions of all virtual hosts in a cluster:
# Requires management plugin to be enabled,
# placeholders are used for credentials and hostname.
# Use HTTPS when possible.
curl -u {username}:{password} -X GET http://{hostname}:15672/api/definitions
The response from the above API endpoint can be piped to jq and similar tools for more human-friendly formatting:
# Requires management plugin to be enabled,
# placeholders are used for credentials and hostname.
# Use HTTPS when possible.
#
# jq is a 3rd party tool that must be available in PATH
curl -u {username}:{password} -X GET http://{hostname}:15672/api/definitions | jq
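To keep the exported definitions for later use, the response of the same endpoint can be written to a local file, for example with curl's -o option (the path below is arbitrary):

# Requires management plugin to be enabled,
# placeholders are used for credentials and hostname.
# Use HTTPS when possible.
curl -u {username}:{password} -X GET -o /path/to/definitions.file.json http://{hostname}:15672/api/definitions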
To import definitions using rabbitmqctl, use rabbitmqctl import_definitions:
# Does not require management plugin to be enabled
rabbitmqctl import_definitions /path/to/definitions.file.json
rabbitmqadmin import is its HTTP API equivalent:
# Requires management plugin to be enabled
rabbitmqadmin import /path/to/definitions.file.json
It is also possible to use the POST /api/definitions API endpoint directly:
# Requires management plugin to be enabled,
# placeholders are used for credentials and hostname.
# Use HTTPS when possible.
curl -u {username}:{password} -H "Content-Type: application/json" -X POST -T /path/to/definitions.file.json http://{hostname}:15672/api/definitions
A definition file can be imported at node startup time or at any point after it. In a multi-node cluster, at-boot-time imports can, and in practice will, result in repetitive work performed by every node on boot. This is of no concern with smaller definition files, but with larger files, importing definitions after cluster deployment (formation), rather than at boot time, is recommended.
Modern releases support definition import directly in the core, without the need to preconfigure the management plugin.
To import definitions from a local file on node boot, set the load_definitions config key to the path of a previously exported JSON file with definitions:
# Does not require management plugin to be enabled.
load_definitions = /path/to/definitions/file.json
Definitions can also be imported on node boot from a URL accessible over HTTPS. Set the definitions.import_backend config key to https and definitions.https.url to a valid URL where a JSON definitions file is located:
# Does not require management plugin to be enabled.
definitions.import_backend = https
definitions.https.url = https://raw.githubusercontent.com/rabbitmq/sample-configs/main/queues/5k-queues.json

# client-side TLS options for definition import
definitions.tls.versions.1 = tlsv1.2
Definition import happens after plugin activation. This means that definitions related to plugins (e.g. dynamic Shovels, exchanges of a custom type, and so on) can be imported at boot time.
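For instance, a dynamic Shovel is represented as a runtime parameter under the top-level "parameters" key of the exported file. A hypothetical fragment (the shovel name, queue names, and URIs are made up):

"parameters": [
  {
    "vhost": "/",
    "component": "shovel",
    "name": "my-shovel",
    "value": {
      "src-protocol": "amqp091",
      "src-uri": "amqp://localhost",
      "src-queue": "source.queue",
      "dest-protocol": "amqp091",
      "dest-uri": "amqp://remote.host",
      "dest-queue": "destination.queue"
    }
  }
],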
The definitions in the file will not overwrite anything already in the broker.
If a blank (uninitialised) node imports a definition file, it will not create the default virtual host and user. In test or QA environments, an equivalent default user can be created via the same definitions file.
For production systems a new user with unique credentials must be created and used instead.
The snippet below demonstrates how the definitions file can be modified to "re-create" the default user, which would only be able to connect from localhost by default:
"users": [ { "name": "guest", "password": "guest", "tags": ["administrator"] } ], "permissions":[ { "user":"guest", "vhost":"/", "configure":".*", "read":".*", "write":".*"} ],
By default, definitions are imported by every cluster node, unconditionally. In many environments the definition file rarely changes. In that case it makes sense to only perform an import when the definition file contents actually change.
This can be done by setting the definitions.skip_if_unchanged configuration key to true:
# when set to true, definition import will only happen
# if definition file contents change
definitions.skip_if_unchanged = true

definitions.import_backend = local_filesystem
definitions.local.path = /path/to/definitions/defs.json
This feature works for both individual files and directories:
# when set to true, definition import will only happen
# if definition file contents change
definitions.skip_if_unchanged = true

definitions.import_backend = local_filesystem
definitions.local.path = /path/to/definitions/conf.d/
It is also supported by the HTTPS endpoint import mechanism:
# when set to true, definition import will only happen
# if definition file contents change
definitions.skip_if_unchanged = true

definitions.import_backend = https
definitions.https.url = https://some.endpoint/path/to/rabbitmq.definitions.json

definitions.tls.verify = verify_peer
definitions.tls.fail_if_no_peer_cert = true

definitions.tls.cacertfile = /path/to/ca_certificate.pem
definitions.tls.certfile = /path/to/client_certificate.pem
definitions.tls.keyfile = /path/to/client_key.pem
Installations that run earlier versions without built-in definition import can import definitions immediately after node boot using a combination of two CLI commands:
# await startup for up to 5 minutes
rabbitmqctl await_startup --timeout 300

# import definitions using rabbitmqctl
rabbitmqctl import_definitions /path/to/definitions.file.json

# OR, import using rabbitmqadmin
# Requires management plugin to be enabled
rabbitmqadmin import /path/to/definitions.file.json
If you have questions about the contents of this guide or any other topic related to RabbitMQ, don't hesitate to ask them on the RabbitMQ mailing list.
If you'd like to contribute an improvement to the site, its source is available on GitHub. Simply fork the repository and submit a pull request. Thank you!