Package: puppet-module-puppetlabs-rabbitmq / 8.5.0-10

Metadata

Package Version Patches format
puppet-module-puppetlabs-rabbitmq 8.5.0-10 3.0 (quilt)

Patch series

Patch File delta Description
do not download rabbitmqadmin in debian.patch

 lib/puppet/provider/rabbitmq_cli.rb |  2 +-
 manifests/init.pp                   | 12 +++++++++---
 2 files changed, 10 insertions(+), 4 deletions(-)

 do not download rabbitmqadmin in debian
correctly report rabbitmq version.patch

 lib/puppet/provider/rabbitmq_cli.rb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

 correctly report rabbitmq version
 The output of "rabbitmqctl -q status" changed between Buster and
 Bullseye, confusing this puppet module. This patch fixes the problem.
 Since the version is now correctly detected, the module can also
 correctly use the --no-table-headers parameter when querying RabbitMQ.
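
 For illustration, a minimal Ruby sketch of the kind of fallback such a
 patch might implement (the helper and regexes here are assumptions, not
 the patch's exact code): on Buster, "rabbitmqctl -q status" prints
 Erlang terms containing {rabbit,"RabbitMQ","3.7.8"}, while on Bullseye
 it prints a plain-text "RabbitMQ version: 3.8.9" line.

    # Hypothetical sketch: extract the version from either output format.
    def rabbitmq_version(output)
      output[/RabbitMQ version:\s*([\d.]+)/, 1] ||    # Bullseye-style plain text
        output[/\{rabbit,"RabbitMQ","([\d.]+)"\}/, 1] # Buster-style Erlang term
    end

    puts rabbitmq_version(`rabbitmqctl -q status`)
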
add minus q when calling rabbitmq plugin list.patch

 lib/puppet/provider/rabbitmq_plugin/rabbitmqplugins.rb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

 add -q when calling rabbitmq-plugins list
 The -q flag is needed in the Bullseye version to suppress the header.
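
 A hedged illustration of why -q matters to the provider (the flags are
 standard rabbitmq-plugins options; the Ruby itself is an assumption,
 not the module's exact code):

    # Without -q, the first line of output is a "Listing plugins ..."
    # banner that would be mistaken for a plugin name; with -q every
    # line is a real plugin, so the output maps straight to instances.
    plugins = `rabbitmq-plugins list -q -e -m`.split("\n").map(&:strip)
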
increase rabbitmq timeout.patch

 lib/puppet/provider/rabbitmq_cli.rb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

 increase rabbitmq cli timeout
 The new version of rabbitmq takes much longer to reply, often exceeding
 the 10-second timeout, which causes the rabbitmq puppet providers to
 simply fail. Increasing it to 20 seconds fixes the problem for me.
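
 For illustration, a minimal Ruby sketch of a retry helper with a
 per-attempt time limit, in the spirit of the provider's wrapper (names
 and defaults here are assumptions; the patch itself only changes the
 timeout value):

    require 'timeout'

    # Hypothetical sketch: retry a slow CLI call, bounding each attempt.
    def run_with_retries(attempts: 30, wait: 6, timeout: 20)
      attempts.times do
        begin
          return Timeout.timeout(timeout) { yield } # give up after `timeout` s
        rescue Timeout::Error
          sleep wait                                # back off, then retry
        end
      end
      raise 'command kept timing out'
    end

    run_with_retries { `rabbitmqctl -q status` }
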
trixie support.patch

 manifests/init.pp | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

 trixie support
fix list_users provider.patch

 lib/puppet/provider/rabbitmq_user/rabbitmqctl.rb | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

 fix list_users provider
setup all nodes as disk nodes.patch

 templates/rabbitmq.config.erb | 1 +
 1 file changed, 1 insertion(+)

 setup all nodes as disk nodes
 In most production setups, it is advised to set all nodes as disk nodes
 (see the config sketch after this list) because of:
  * Simplicity: all nodes behave the same; no special recovery procedures.
  * Resilience: if one disk node fails, you don't risk being left with
    only RAM nodes (which could cause data loss if all RAM nodes restart).
  * Operational flexibility: you can remove or add nodes without worrying
    about last-disk-node constraints.
  * Modern RabbitMQ performance: RAM nodes used to help in early versions
    (pre-3.x) when Mnesia disk I/O was slow, but with current disks and
    SSDs the performance benefit is negligible.
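
 For illustration, this is roughly what such a template renders in a
 classic Erlang-term rabbitmq.config when every member is declared as a
 disc node (the node names are hypothetical examples, not taken from the
 patch):

    [
      {rabbit, [
        %% declare every cluster member as a disc node
        {cluster_nodes, {['rabbit@node1', 'rabbit@node2', 'rabbit@node3'], disc}}
      ]}
    ].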