Trigger GMR for Service manila
Trigger GMR for Service cinder
Trigger GMR for Nova services
Will retrieve SOS reports from nodes crc
Generating SOS Report for crc
Journal size limit not set or invalid: ignoring
Generating SOS Report for EDPM compute-0
Generating SOS Report for EDPM compute-1
Generating SOS Report for EDPM compute-2
Starting pod/crc-debug-hk9ql ...
To use host binaries, run `chroot /host`
Warning: Permanently added '192.168.122.100' (ED25519) to the list of known hosts.
Warning: Permanently added '192.168.122.101' (ED25519) to the list of known hosts.
Warning: Permanently added '192.168.122.102' (ED25519) to the list of known hosts.
+--------------------------------------+----------------+--------------------------------+----------+---------+-------+----------------------------+
| ID                                   | Binary         | Host                           | Zone     | Status  | State | Updated At                 |
+--------------------------------------+----------------+--------------------------------+----------+---------+-------+----------------------------+
| e7529622-a9ef-4996-a2cb-05f1150673b5 | nova-conductor | nova-cell0-conductor-0         | internal | enabled | up    | 2026-01-22T10:13:47.000000 |
| cc338236-6a99-4151-a875-82d6aaf4cf5d | nova-scheduler | nova-scheduler-0               | internal | enabled | up    | 2026-01-22T10:13:54.000000 |
| e9dcd746-04ba-4b40-868c-b6c8abe497fa | nova-conductor | nova-cell1-conductor-0         | internal | enabled | up    | 2026-01-22T10:13:47.000000 |
| c0eadc75-a3f1-4d95-aba0-c4c4ae53a0f5 | nova-compute   | compute-1.ctlplane.example.com | nova     | enabled | up    | 2026-01-22T10:13:54.000000 |
| 1be2a3a2-422e-4aca-9903-04aa4a6caaa4 | nova-compute   | compute-0.ctlplane.example.com | nova     | enabled | up    | 2026-01-22T10:13:54.000000 |
| 2fe6b411-cdb7-47d1-94e4-aaf2baf2d13f | nova-compute   | compute-2.ctlplane.example.com | nova     | enabled | up    | 2026-01-22T10:13:54.000000 |
+--------------------------------------+----------------+--------------------------------+----------+---------+-------+----------------------------+
sos report (version 4.10.1)
sos report (version 4.10.1)
OMC mode: Collecting OLM resources (subscriptions, CSVs, etc.) in OMC format
sos report (version 4.10.1)
This command will collect diagnostic and configuration information from this CentOS Linux system and installed applications.
An archive containing the collected information will be generated in /var/tmp/sos-osp/sos.ci6qotu5 and may be provided to a CentOS support representative.
Any information provided to CentOS will be treated in accordance with the published support policies at:
  Community Website : https://www.centos.org/
The generated archive may contain data considered sensitive and its content should be reviewed by the originating organization before being passed to any third party.
No changes will be made to system configuration.
Setting up archive ...
Setting up plugins ...
An archive containing the collected information will be generated in /var/tmp/sos-osp/sos._ergq9gd and may be provided to a CentOS support representative.
Setting up archive ...
Setting up plugins ...
An archive containing the collected information will be generated in /var/tmp/sos-osp/sos._e710vyu and may be provided to a CentOS support representative.
Any information provided to CentOS will be treated in accordance with the published support policies at:
  Community Website : https://www.centos.org/
The generated archive may contain data considered sensitive and its content should be reviewed by the originating organization before being passed to any third party.
No changes will be made to system configuration.
Setting up archive ...
Setting up plugins ...
Gathering data for ns/openstack...
toolbox-osp
Trying to pull registry.redhat.io/rhel9/support-tools:latest...
Getting image source signatures
Checking if image destination supports signatures
Copying blob sha256:b98104ab0e1239a911fc1ca3c8589101c7fa3eb521b2c4b1fb1120038f55fbe9
Copying blob sha256:34b5c851d9cf523f162ceb72c260f1c6d1e556f8f4422e15258572766f2afc28
Copying config sha256:907c6f8a1bbc29560332663e8e9c85244e317088310a19891a847689ebec5226
Writing manifest to image destination
Storing signatures
907c6f8a1bbc29560332663e8e9c85244e317088310a19891a847689ebec5226
.toolboxrc file detected, overriding defaults...
Checking if there is a newer version of registry.redhat.io/rhel9/support-tools available...
[plugin:networking] skipped command 'ip -s macsec show': required kmods missing: macsec. Use '--allow-system-changes' to enable collection.
[plugin:networking] skipped command 'ss -peaonmi': required kmods missing: xsk_diag. Use '--allow-system-changes' to enable collection.
[plugin:openstack_neutron] Could not open conf file /etc/neutron/plugins/ml2/ml2_conf.ini: [Errno 2] No such file or directory: '/etc/neutron/plugins/ml2/ml2_conf.ini'
Not all environment variables set. Source the environment file for the user intended to connect to the OpenStack environment.
[plugin:openstack_neutron] Could not open conf file /etc/neutron/plugins/ml2/ml2_conf.ini: [Errno 2] No such file or directory: '/etc/neutron/plugins/ml2/ml2_conf.ini'
Not all environment variables set. Source the environment file for the user intended to connect to the OpenStack environment.
Wrote inspect data to /must-gather.
[plugin:networking] skipped command 'ip -s macsec show': required kmods missing: macsec. Use '--allow-system-changes' to enable collection.
[plugin:networking] skipped command 'ss -peaonmi': required kmods missing: xsk_diag. Use '--allow-system-changes' to enable collection.
[plugin:systemd] skipped command 'systemd-resolve --status': required services missing: systemd-resolved.
[plugin:systemd] skipped command 'systemd-resolve --statistics': required services missing: systemd-resolved.
Running plugins. Please wait ...
Starting 2/82 anacron
Starting 1/82 anaconda
Starting 3/82 ata
Starting 4/82 auditd
Starting 5/82 block
Starting 6/82 boot
Running plugins. Please wait ...
Starting 4/82 auditd
Starting 2/82 anacron
Starting 3/82 ata
Starting 1/82 anaconda
Starting 5/82 block
Starting 6/82 boot
[plugin:systemd] skipped command 'systemd-resolve --status': required services missing: systemd-resolved.
[plugin:systemd] skipped command 'systemd-resolve --statistics': required services missing: systemd-resolved.
Starting 7/82 buildah
Starting 8/82 ceph_common
Starting 9/82 ceph_mds
Running plugins. Please wait ...
Starting 10/82 ceph_mgr
Starting 11/82 ceph_mon
Spawning a container 'toolbox-osp' with image 'registry.redhat.io/rhel9/support-tools'
Detected RUN label in the container image. Using that as the default...
d19e45be89fe0c54fb460321049cd48f0e8310d392718ce99ec7da9811c72851
toolbox-osp sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=/var/tmp/sos-osp; exit
Starting 12/82 ceph_osd
temporary directory /var/tmp/sos-osp does not exist or is not writable
exit
tar: Removing leading `/' from member names
tar: /var/log/pods/*/*.log.*: Warning: Cannot stat: No such file or directory
Starting 13/82 ceph_rgw
Starting 14/82 cgroups
Starting 15/82 chrony
Starting 16/82 console
Starting 17/82 containers_common
Starting 18/82 coredump
Starting 19/82 cron
Starting 20/82 crypto
Starting 21/82 dbus
Starting 22/82 devicemapper
Starting 23/82 devices
Starting 24/82 dnf
Starting 25/82 filesys
Starting 26/82 firewall_tables
Starting 27/82 hardware
Starting 28/82 host
Starting 29/82 i18n
Starting 30/82 iscsi
Starting 31/82 kdump
Starting 32/82 kernel
Starting 33/82 keyutils
Starting 34/82 krb5
Starting 35/82 kvm
Starting 36/82 ldap
Starting 37/82 libraries
Starting 38/82 libvirt
Starting 39/82 login
Starting 40/82 logrotate
Starting 41/82 logs
Starting 42/82 lvm2
Starting 43/82 md
Starting 44/82 memory
Starting 45/82 multipath
Starting 46/82 networking
Starting 47/82 networkmanager
Starting 48/82 nfs
Starting 49/82 numa
Starting 50/82 nvme
Starting 51/82 openhpi
Starting 52/82 openstack_edpm
Starting 53/82 openstack_neutron
Starting 54/82 openstack_nova
Starting 55/82 openvswitch
Gathering data for ns/metallb-system...
Starting 56/82 ovn_host
Starting 57/82 pam
Starting 58/82 pci
Starting 59/82 podman
Starting 60/82 process
Starting 61/82 processor
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
Starting 62/82 python
Wrote inspect data to /must-gather.
tar: Removing leading `/' from hard link targets
Starting 63/82 release
Starting 64/82 rpm
Starting 65/82 sar
Starting 66/82 scsi
Starting 67/82 selinux
Gathering secrets in namespace openstack
Starting 68/82 services
Starting 69/82 ssh
Starting 70/82 sudo
Starting 71/82 sunrpc
Starting 72/82 system
Wrote inspect data to /must-gather.
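The toolbox invocation above fails with `temporary directory /var/tmp/sos-osp does not exist or is not writable`. A minimal sketch of a pre-flight guard that could run before the `sos report ... --tmp-dir=/var/tmp/sos-osp` command; the path comes from the transcript, but the wrapper itself is illustrative, not part of the collection tooling shown here:

```shell
#!/bin/sh
# Hypothetical guard: make sure the sos tmp dir exists and is writable
# before collection, so the run fails fast instead of inside the toolbox.
SOS_TMP_DIR=/var/tmp/sos-osp

# Create the directory if it is missing (no-op when it already exists).
mkdir -p "$SOS_TMP_DIR"

# Verify writability; this is the condition sos itself complains about.
if [ -w "$SOS_TMP_DIR" ]; then
    echo "tmp dir ok: $SOS_TMP_DIR"
else
    echo "tmp dir not writable: $SOS_TMP_DIR" >&2
    exit 1
fi

# sos report --batch --all-logs --tmp-dir="$SOS_TMP_DIR" ...
```

The final `sos report` line is left commented out because the full plugin list and toolbox context are specific to the run captured above.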
Starting 62/82 python
Starting 63/82 release
Removing debug pod ...
Starting 64/82 rpm
Starting 65/82 sar
Starting 66/82 scsi
Starting 67/82 selinux
Starting 68/82 services
Starting 69/82 ssh
Starting 70/82 sudo
Starting 71/82 sunrpc
Starting 72/82 system
Starting 73/82 systemd
Retrieving SOS Report for crc
Starting 74/82 sysvipc
Starting 75/82 tpm2
Starting 76/82 tuned
Starting pod/crc-debug-cwv6q ...
To use host binaries, run `chroot /host`
Starting 77/82 udev
Starting 78/82 unbound
Starting 79/82 vhostmd
Starting 80/82 virsh
Starting 81/82 xen
Starting 82/82 xfs
Finishing plugins
Removing debug pod ...
Finishing plugins
Finished running plugins
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
Wrote inspect data to /must-gather.
Gathering data for ns/openstack-operators...
Gathering secrets in namespace openstack-operators
Gathering data for ns/openshift-machine-api...
Gathering secrets in namespace openshift-machine-api
Wrote inspect data to /must-gather.
Creating compressed archive...
Your sos report has been generated and saved in:
	/var/tmp/sos-osp/sosreport-compute-2-2026-01-22-atjyvqd.tar.xz
	Size	25.68MiB
	Owner	root
	sha256	bd34abe9aaa8e3829416aa9f40e43c8686fbbb9b40cdb54c2b73420761265f4c
Please send this file to your support representative.
Retrieving SOS Report for compute-2
Creating compressed archive...
Your sos report has been generated and saved in:
	/var/tmp/sos-osp/sosreport-compute-1-2026-01-22-rdzvgjc.tar.xz
	Size	23.89MiB
	Owner	root
	sha256	c4bb4724927d665c5fc61449fed838a193a73521ebf727ca179906b5bf2a9238
Please send this file to your support representative.
Retrieving SOS Report for compute-1
Finished retrieving SOS Report for compute-1
Finished retrieving SOS Report for compute-2
Creating compressed archive...
Your sos report has been generated and saved in:
	/var/tmp/sos-osp/sosreport-compute-0-2026-01-22-kmiewqw.tar.xz
	Size	28.46MiB
	Owner	root
	sha256	26254b2b776db1a07363527cee5f9b9edeaa346839e2176ec95c0a71d55d072f
Please send this file to your support representative.
Retrieving SOS Report for compute-0
Finished retrieving SOS Report for compute-0
Gathering data for ns/cert-manager...
Gathering secrets in namespace cert-manager
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
W0122 10:15:12.986460 5729 util.go:195] skipping , failed to read event err: Object 'Kind' is missing in ''
Wrote inspect data to /must-gather.
Gathering data for ns/openshift-nmstate...
Gathering secrets in namespace openshift-nmstate
Gathering data for ns/openshift-operators...
Gathering secrets in namespace openshift-operators
Gathering data for ns/metallb-system...
Gathering secrets in namespace metallb-system
Wrote inspect data to /must-gather.
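Each "Your sos report has been generated" block above prints a sha256 alongside the archive path. After copying a report off the node, that hash can be checked with stock coreutils; the snippet below demonstrates the `sha256sum -c` pattern on a scratch file, since the real archives (e.g. sosreport-compute-1-2026-01-22-rdzvgjc.tar.xz) are not present here:

```shell
#!/bin/sh
# Illustrative checksum verification. In practice, put the hash and
# filename from the log into a checksum file, then run `sha256sum -c`.
tmpfile=$(mktemp)
printf 'demo payload' > "$tmpfile"

# Generate a checksum line in the "<hash>  <file>" format sha256sum expects;
# this stands in for the sha256 line printed by sos report.
sha256sum "$tmpfile" > "$tmpfile.sha256"

# Verify; prints "<file>: OK" and exits 0 when the hash matches.
sha256sum -c "$tmpfile.sha256" && echo "checksum ok"

rm -f "$tmpfile" "$tmpfile.sha256"
```

For a real archive the checksum file would contain the hash and name exactly as printed in the transcript, e.g. `c4bb4724...  sosreport-compute-1-2026-01-22-rdzvgjc.tar.xz`.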
Wrote inspect data to /must-gather.
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
Gathering data for ns/openshift-marketplace...
Gathering secrets in namespace openshift-marketplace
Gathering data for ns/openshift-operators...
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
Gathering secrets in namespace openshift-operators
Wrote inspect data to /must-gather.
Gathering data for ns/metallb-system...
Gathering data for ns/cert-manager...
Gathering data for ns/openstack-operators...
Gathering data for ns/openshift-machine-api...
Gathering data for ns/openshift-nmstate...
Gathering data for ns/openshift-monitoring...
Gathering data for ns/openshift-multus...
Gather ctlplane service info: openstack
Gather ctlplane service info: ovn
Gather ctlplane service info: rabbitmq
Copying OVN NB database from ovsdbserver-nb-0
Defaulted container "ovsdbserver-nb" out of: ovsdbserver-nb, openstack-network-exporter
tar: Removing leading `/' from member names
Copying OVN SB database from ovsdbserver-sb-0
Defaulted container "rabbitmq" out of: rabbitmq, setup-container (init)
Defaulted container "ovsdbserver-sb" out of: ovsdbserver-sb, openstack-network-exporter
tar: Removing leading `/' from member names
Gather ctlplane service info: keystone
Gather ctlplane service info: glance
Gather ctlplane service info: manila
Gather ctlplane service info: cinder
Gather ctlplane service info: nova
Gather ctlplane service info: placement
Gather ctlplane service info: neutron
Gather ctlplane service info: swift
Gather ctlplane service info: barbican
Gather ctlplane service info: ceilometer
Extensions list not supported by Identity API
tar: Removing leading `/' from member names
The /must-gather/must-gather.tar.xz now can be attached to the support case.