Trigger GMR for Service cinder
Trigger GMR for Nova services
Will retrieve SOS reports from nodes crc
Generating SOS Report for crc
Journal size limit not set or invalid: ignoring
Starting pod/crc-debug-xndhg ...
To use host binaries, run `chroot /host`
Generating SOS Report for EDPM edpm-compute-0
Warning: Permanently added '192.168.122.100' (ED25519) to the list of known hosts.
OMC mode: Collecting OLM resources (subscriptions, CSVs, etc.) in OMC format
+--------------------------------------+----------------+-------------------------------------+----------+---------+-------+----------------------------+
| ID                                   | Binary         | Host                                | Zone     | Status  | State | Updated At                 |
+--------------------------------------+----------------+-------------------------------------+----------+---------+-------+----------------------------+
| 58c63eee-7195-4414-af34-360f81302c75 | nova-conductor | nova-cell0-conductor-0              | internal | enabled | up    | 2026-01-21T13:58:11.000000 |
| 305f1042-ec3b-4358-b8bb-8fc6e97ec5cb | nova-scheduler | nova-scheduler-0                    | internal | enabled | up    | 2026-01-21T13:58:17.000000 |
| 080443ee-10ed-48a6-bb39-538bfaf74af1 | nova-conductor | nova-cell1-conductor-0              | internal | enabled | up    | 2026-01-21T13:58:17.000000 |
| 7ba4128c-c91d-465c-b645-453afcdd12a2 | nova-compute   | edpm-compute-0.ctlplane.example.com | nova     | enabled | up    | 2026-01-21T13:58:17.000000 |
+--------------------------------------+----------------+-------------------------------------+----------+---------+-------+----------------------------+

sos report (version 4.10.1)

This command will collect diagnostic and configuration information from
this CentOS Linux system and installed applications.

An archive containing the collected information will be generated in
/var/tmp/sos-osp/sos.j2z1ylsu and may be provided to a CentOS support
representative.
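The "Trigger GMR" lines above ask the cinder and nova services to dump a Guru Meditation Report into their logs. For oslo.reports-based services this is conventionally done by delivering SIGUSR2 to the service process; a hedged sketch of the idea (the `oc exec` target pod and PID are assumptions for illustration, not taken from this log):

```shell
# Assumed manual equivalent of the GMR trigger: send SIGUSR2 to an
# oslo.reports-based service process so it writes a Guru Meditation
# Report to its log (pod name and PID 1 are examples only):
#   oc -n openstack exec nova-scheduler-0 -- kill -SIGUSR2 1
# Local demonstration of the underlying signal/trap mechanism:
out=$(bash -c 'trap "echo report-triggered" USR2; kill -USR2 $$')
echo "$out"
```

The report then appears in the service's log stream, which is why triggering it precedes the log collection below.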
Any information provided to CentOS will be treated in accordance with the
published support policies at:

        Community Website : https://www.centos.org/

The generated archive may contain data considered sensitive and its content
should be reviewed by the originating organization before being passed to
any third party.

No changes will be made to system configuration.

 Setting up archive ...
 Setting up plugins ...
Gathering data for ns/openstack...
Wrote inspect data to /must-gather.
[plugin:fwupd] skipped command 'fwupdmgr get-approved-firmware': required services missing: fwupd.
[plugin:fwupd] skipped command 'fwupdmgr get-devices --no-unreported-check': required services missing: fwupd.
[plugin:fwupd] skipped command 'fwupdmgr get-history': required services missing: fwupd.
[plugin:fwupd] skipped command 'fwupdmgr get-remotes': required services missing: fwupd.
[plugin:fwupd] skipped command '/usr/libexec/fwupd/fwupdagent get-devices': required services missing: fwupd.
[plugin:fwupd] skipped command '/usr/libexec/fwupd/fwupdagent get-updates': required services missing: fwupd.
Wrote inspect data to /must-gather.
Gathering secrets in namespace openstack
Wrote inspect data to /must-gather.
[plugin:networking] skipped command 'ip -s macsec show': required kmods missing: macsec. Use '--allow-system-changes' to enable collection.
[plugin:openstack_neutron] Could not open conf file /etc/neutron/plugins/ml2/ml2_conf.ini: [Errno 2] No such file or directory: '/etc/neutron/plugins/ml2/ml2_conf.ini'
Not all environment variables set. Source the environment file for the user intended to connect to the OpenStack environment.
[plugin:systemd] skipped command 'systemd-resolve --status': required services missing: systemd-resolved.
[plugin:systemd] skipped command 'systemd-resolve --statistics': required services missing: systemd-resolved.
 Running plugins. Please wait ...
 Starting 3/78  ata             [Running: ata]
 Starting 4/78  auditd          [Running: auditd]
 Starting 1/78  anaconda        [Running: auditd anaconda]
 Starting 2/78  anacron         [Running: auditd anaconda anacron]
 Starting 5/78  block           [Running: auditd anaconda anacron block]
Wrote inspect data to /must-gather.
 Starting 6/78  boot            [Running: auditd anaconda block boot]
 Starting 7/78  buildah         [Running: auditd block boot buildah]
 Starting 8/78  ceph_common     [Running: block boot buildah ceph_common]
 Starting 9/78  cgroups         [Running: block boot ceph_common cgroups]
 Starting 10/78 chrony          [Running: block boot cgroups chrony]
 Starting 11/78 console         [Running: block boot cgroups console]
 Starting 12/78 containers_common [Running: block boot cgroups containers_common]
 Starting 13/78 coredump        [Running: block boot cgroups coredump]
 Starting 14/78 cron            [Running: block boot cgroups cron]
 Starting 15/78 crypto          [Running: block boot cgroups crypto]
 Starting 16/78 dbus            [Running: block boot cgroups dbus]
 Starting 17/78 devicemapper    [Running: block boot cgroups devicemapper]
 Starting 18/78 devices         [Running: block boot cgroups devices]
 Starting 19/78 dnf             [Running: block boot cgroups dnf]
 Starting 20/78 filesys         [Running: block boot dnf filesys]
 Starting 21/78 firewall_tables [Running: boot dnf filesys firewall_tables]
 Starting 22/78 fwupd           [Running: boot dnf firewall_tables fwupd]
 Starting 23/78 hardware        [Running: boot dnf firewall_tables hardware]
 Starting 24/78 host            [Running: boot dnf hardware host]
 Starting 25/78 i18n            [Running: boot dnf hardware i18n]
 Starting 26/78 iscsi           [Running: boot dnf hardware iscsi]
 Starting 27/78 kernel          [Running: boot dnf hardware kernel]
 Starting 28/78 keyutils        [Running: dnf hardware kernel keyutils]
 Starting 29/78 krb5            [Running: dnf hardware kernel krb5]
 Starting 30/78 kvm             [Running: dnf hardware kernel kvm]
 Starting 31/78 ldap            [Running: dnf kernel kvm ldap]
 Starting 32/78 libraries       [Running: dnf kernel kvm libraries]
 Starting 33/78 libvirt         [Running: dnf kernel kvm libvirt]
 Starting 34/78 login           [Running: dnf kernel kvm login]
 Starting 35/78 logrotate       [Running: dnf kernel kvm logrotate]
 Starting 36/78 logs            [Running: dnf kernel kvm logs]
 Starting 37/78 lvm2            [Running: dnf kvm logs lvm2]
 Starting 38/78 md              [Running: dnf kvm logs md]
 Starting 39/78 memory          [Running: dnf kvm logs memory]
 Starting 40/78 multipath       [Running: dnf logs memory multipath]
 Starting 41/78 networking      [Running: dnf logs multipath networking]
 Starting 42/78 networkmanager  [Running: dnf logs networking networkmanager]
 Starting 43/78 nfs             [Running: dnf logs networking nfs]
 Starting 44/78 numa            [Running: dnf logs networking numa]
 Starting 45/78 nvme            [Running: dnf logs networking nvme]
 Starting 46/78 openhpi         [Running: dnf logs networking openhpi]
 Starting 47/78 openstack_edpm  [Running: dnf logs networking openstack_edpm]
 Starting 48/78 openstack_neutron [Running: dnf logs networking openstack_neutron]
 Starting 49/78 openstack_nova  [Running: dnf logs networking openstack_nova]
 Starting 50/78 openvswitch     [Running: dnf logs networking openvswitch]
 Starting 51/78 ovn_host        [Running: dnf networking openvswitch ovn_host]
 Starting 52/78 pam             [Running: dnf networking openvswitch pam]
 Starting 53/78 pci             [Running: dnf networking openvswitch pci]
 Starting 54/78 podman          [Running: dnf networking openvswitch podman]
 Starting 55/78 process         [Running: dnf networking podman process]
 Starting 56/78 processor       [Running: dnf networking process processor]
 Starting 57/78 python          [Running: dnf process processor python]
 Starting 58/78 release         [Running: dnf process processor release]
 Starting 59/78 rpm             [Running: dnf process processor rpm]
 Starting 60/78 sar             [Running: dnf process processor sar]
 Starting 61/78 scsi            [Running: dnf process processor scsi]
 Starting 62/78 selinux         [Running: dnf process processor selinux]
 Starting 63/78 services        [Running: dnf processor selinux services]
 Starting 64/78 ssh             [Running: dnf processor selinux ssh]
 Starting 65/78 sudo            [Running: dnf processor selinux sudo]
 Starting 66/78 sunrpc          [Running: dnf processor selinux sunrpc]
 Starting 67/78 system          [Running: dnf processor selinux system]
 Starting 68/78 systemd         [Running: processor selinux system systemd]
Gathering data for ns/metallb-system...
 Starting 69/78 sysvipc         [Running: processor selinux systemd sysvipc]
 Starting 70/78 tpm2            [Running: processor selinux systemd tpm2]
 Starting 71/78 tuned           [Running: processor selinux systemd tuned]
 Starting 72/78 udev            [Running: processor systemd tuned udev]
 Starting 73/78 udisks          [Running: processor systemd tuned udisks]
 Starting 74/78 unbound         [Running: processor systemd udisks unbound]
 Starting 75/78 vhostmd         [Running: processor systemd udisks vhostmd]
 Starting 76/78 virsh           [Running: processor systemd udisks virsh]
 Starting 77/78 xen             [Running: processor systemd virsh xen]
 Finishing plugins              [Running: processor systemd virsh]
 Starting 78/78 xfs             [Running: processor systemd virsh xfs]
 Finishing plugins              [Running: processor systemd virsh]
 Finishing plugins              [Running: processor systemd]
 Finishing plugins              [Running: processor]
 Finished running plugins
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
Wrote inspect data to /must-gather.
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
warning: Container container-00 is unable to start due to an error: Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296"
Wrote inspect data to /must-gather.
Creating compressed archive...

Your sos report has been generated and saved in:
	/var/tmp/sos-osp/sosreport-edpm-compute-0-2026-01-21-vhqkamg.tar.xz

 Size	14.45MiB
 Owner	root
 sha256	c5a9f7cc057e6cabca343bd58b78ebe8a4abca615116dcb8d3001e4dd20b1dca

Please send this file to your support representative.

Retrieving SOS Report for edpm-compute-0
Finished retrieving SOS Report for edpm-compute-0
Trying to pull registry.redhat.io/rhel9/support-tools:latest...
Getting image source signatures
Checking if image destination supports signatures
Copying blob sha256:b98104ab0e1239a911fc1ca3c8589101c7fa3eb521b2c4b1fb1120038f55fbe9
Copying blob sha256:34b5c851d9cf523f162ceb72c260f1c6d1e556f8f4422e15258572766f2afc28
Gathering data for ns/openstack-operators...
Gathering secrets in namespace openstack-operators
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
Copying config sha256:907c6f8a1bbc29560332663e8e9c85244e317088310a19891a847689ebec5226
Writing manifest to image destination
Storing signatures
907c6f8a1bbc29560332663e8e9c85244e317088310a19891a847689ebec5226
.toolboxrc file detected, overriding defaults...
Checking if there is a newer version of registry.redhat.io/rhel9/support-tools available...
Wrote inspect data to /must-gather.
Spawning a container 'toolbox-osp' with image 'registry.redhat.io/rhel9/support-tools'
Detected RUN label in the container image. Using that as the default...
f3eba312263ba18091473d4518fa88b9dd6a32dfedac5b9dff75d7e5eebcf7f7
toolbox-osp
sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=/var/tmp/sos-osp; exit
Gathering data for ns/baremetal-operator-system...
Gathering secrets in namespace baremetal-operator-system
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
temporary directory /var/tmp/sos-osp does not exist or is not writable
exit
Wrote inspect data to /must-gather.
Removing debug pod ...
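The toolbox run above fails with `temporary directory /var/tmp/sos-osp does not exist or is not writable`, which is why no crc sosreport is produced and the download below fails. A minimal sketch of the likely fix, assuming the directory simply needs to exist on the host before the toolbox sos invocation (the path is taken from the log; the toolbox command itself needs the cluster and is shown only as a comment):

```shell
# The log shows sos failing because its --tmp-dir is missing on the host.
# Pre-creating it (same path as in the log) should let a re-run succeed.
SOS_TMP="${SOS_TMP:-/var/tmp/sos-osp}"
mkdir -p "$SOS_TMP"
chmod 0777 "$SOS_TMP"   # sos inside the toolbox container must be able to write here
# Then re-run the collection, e.g. (cluster-only, copied from the log above):
#   toolbox-osp sos report --batch --all-logs --tmp-dir="$SOS_TMP"; exit
```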
Retrieving SOS Report for crc
Failed to download sosreport-crc.tar.xz
not deleting file
Gathering data for ns/openshift-machine-api...
Gathering secrets in namespace openshift-machine-api
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Gathering data for ns/cert-manager...
Gathering secrets in namespace cert-manager
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Gathering data for ns/openshift-nmstate...
Gathering secrets in namespace openshift-nmstate
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Gathering data for ns/openshift-operators...
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
Gathering secrets in namespace openshift-operators
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Gathering data for ns/metallb-system...
Gathering secrets in namespace metallb-system
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
Wrote inspect data to /must-gather.
Gathering data for ns/openshift-marketplace...
Gathering secrets in namespace openshift-marketplace
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
Wrote inspect data to /must-gather.
Gathering data for ns/openshift-operators...
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
Gathering secrets in namespace openshift-operators
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Gathering data for ns/sushy-emulator...
Gathering secrets in namespace sushy-emulator
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
Wrote inspect data to /must-gather.
Gathering data for ns/baremetal-operator-system...
Gathering data for ns/cert-manager...
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
Wrote inspect data to /must-gather.
Gathering data for ns/openstack-operators...
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
Gathering data for ns/metallb-system...
Wrote inspect data to /must-gather.
Wrote inspect data to /must-gather.
Gathering data for ns/openshift-nmstate...
Gathering data for ns/cert-manager...
Wrote inspect data to /must-gather.
Gathering data for ns/openshift-machine-api...
Gathering data for ns/openstack-operators...
Gathering data for ns/openshift-monitoring...
Gathering data for ns/openshift-multus...
Wrote inspect data to /must-gather.
Gather ctlplane service info: ovn
Gather ctlplane service info: openstack
Gather ctlplane service info: rabbitmq
Copying OVN NB database from ovsdbserver-nb-0
Defaulted container "ovsdbserver-nb" out of: ovsdbserver-nb, openstack-network-exporter
Defaulted container "rabbitmq" out of: rabbitmq, setup-container (init)
Copying OVN SB database from ovsdbserver-sb-0
tar: Removing leading `/' from member names
Defaulted container "rabbitmq" out of: rabbitmq, setup-container (init)
Defaulted container "rabbitmq" out of: rabbitmq, setup-container (init)
Defaulted container "rabbitmq" out of: rabbitmq, setup-container (init)
Defaulted container "rabbitmq" out of: rabbitmq, setup-container (init)
Defaulted container "ovsdbserver-sb" out of: ovsdbserver-sb, openstack-network-exporter
Gather ctlplane service info: keystone
Gather ctlplane service info: glance
Gather ctlplane service info: cinder
tar: Removing leading `/' from member names
Gather ctlplane service info: nova
Error: unable to perform an
operation on node 'rabbit@rabbitmq-cell1-server-0.rabbitmq-cell1-nodes.openstack'. Please see diagnostics information and suggestions below.

Most common reasons for this are:

 * Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
 * CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
 * Target node is not running

In addition to the diagnostics info below:

 * See the CLI, clustering and networking guides on https://rabbitmq.com/documentation.html to learn more
 * Consult server logs on node rabbit@rabbitmq-cell1-server-0.rabbitmq-cell1-nodes.openstack
 * If target node is configured to use long node names, don't forget to use --longnames with CLI tools

DIAGNOSTICS
===========

attempted to contact: ['rabbit@rabbitmq-cell1-server-0.rabbitmq-cell1-nodes.openstack']

rabbit@rabbitmq-cell1-server-0.rabbitmq-cell1-nodes.openstack:
  * connected to epmd (port 4369) on rabbitmq-cell1-server-0.rabbitmq-cell1-nodes.openstack
  * node rabbit@rabbitmq-cell1-server-0.rabbitmq-cell1-nodes.openstack up, 'rabbit' application running

Current node details:
 * node name: 'rabbitmqcli-401-rabbit@rabbitmq-cell1-server-0.rabbitmq-cell1-nodes.openstack'
 * effective user's home directory: /var/lib/rabbitmq
 * Erlang cookie hash: R5yu2JhDbpaKjFJZfweiQg==

Error: unable to perform an operation on node 'rabbit@rabbitmq-cell1-server-0.rabbitmq-cell1-nodes.openstack'. Please see diagnostics information and suggestions below.
Error: unable to perform an operation on node 'rabbit@rabbitmq-cell1-server-0.rabbitmq-cell1-nodes.openstack'. Please see diagnostics information and suggestions below.
command terminated with exit code 69
command terminated with exit code 69
command terminated with exit code 69
Defaulted container "rabbitmq" out of: rabbitmq, setup-container (init)
Defaulted container "rabbitmq" out of: rabbitmq, setup-container (init)
Defaulted container "rabbitmq" out of: rabbitmq, setup-container (init)
Defaulted container "rabbitmq" out of: rabbitmq, setup-container (init)
Defaulted container "rabbitmq" out of: rabbitmq, setup-container (init)
Defaulted container "rabbitmq" out of: rabbitmq, setup-container (init)
Gather ctlplane service info: placement
Gather ctlplane service info: neutron
Defaulted container "rabbitmq" out of: rabbitmq, setup-container (init)
Gather ctlplane service info: swift
Gather ctlplane service info: barbican
Gather ctlplane service info: ceilometer
Extensions list not supported by Identity API
tar: Removing leading `/' from member names
The /must-gather/must-gather.tar.xz now can be attached to the support case.
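Before attaching archives like the sosreport collected above to a support case, it is worth re-checking them against the sha256 the tool printed. A small sketch (the file name and digest in the commented call are the ones from this log; `verify_archive` is a hypothetical helper, not part of sos or must-gather):

```shell
# Compare a downloaded archive against the sha256 printed by sos at
# collection time; a mismatch means the transfer corrupted the file.
verify_archive() {
  local file="$1" expected="$2" actual
  actual=$(sha256sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
  else
    echo "checksum MISMATCH"
  fi
}
# Example with the values reported in this log:
#   verify_archive sosreport-edpm-compute-0-2026-01-21-vhqkamg.tar.xz \
#     c5a9f7cc057e6cabca343bd58b78ebe8a4abca615116dcb8d3001e4dd20b1dca
```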